Dispatches from Brookline: Home Schooling and Social Distancing II

In a previous post, I described how my wife Nell, our two daughters and I were coping with social distancing and the closure of the public schools in Brookline, Massachusetts until at least April 3, 2020. Besides staying inside as much as possible, we converted our dining room into a functioning classroom, complete with workbooks, flip charts and a very popular white board.

**********

On Thursday, March 19, 2020, I came downstairs to find this in the “classroom.”

March 19

Unlike the previous day, our daughters had a much smoother morning. Nell set up the video game Just Dance on the big screen HD television in our living room, which was particularly good for our 6th-grade daughter, who requires a great deal of regular physical activity. Our 4th-grade daughter would generally prefer to sit quietly in a darkened bedroom with an iPad. Both daughters have also made extensive use of FaceTime to stay in touch with their many friends.

When “Dad Academy” began, our older daughter read aloud the Preamble to the Constitution of the United States of America (“Constitution”). We then proceeded to work through much of Article I, establishing the nature and role of the House of Representatives (“House”) and the Senate. After a brief foray into Article II and the qualifications for the presidency, however, it was clear their doodling minds were wandering.

As a result, I shifted gears and walked them through the scenario I detail below: what would happen as of 12:01 pm on January 20, 2021 if there were no November 2020 elections for the House, Senate, vice president and president. I had tweeted my initial thoughts on Wednesday, but as I sketched it out—much to their delight, I am pleased to report—I realized I had forgotten a crucial element. After a quick check of this year’s Senate elections, I made the appropriate revisions on Twitter and, more importantly, the white board.

This quickly devolved into both daughters sketching out their own mind-bogglingly grim doomsday scenarios on the white board, all of which seemed to end up with 50,000 or 100,000 survivors living on Antarctica and dividing up only whatever food they could carry with them. Hey, they were using their imaginations, thinking about geography and doing arithmetic, so I was not complaining.

After an hour-long break, we reconvened to resume learning about basic statistics. After quickly reviewing frequencies, range, mode, median, mean and a few statistical distributions, I decided to change my lesson plan again. Rather than begin to discuss relationships between variables, I put my doctorate in epidemiology to good use and explained “sensitivity” and “specificity” of testing for some condition like, say, the novel coronavirus. They quickly grasped the underlying idea:

  1. Persons who have the condition AND test positive are True Positives
  2. Persons who do not have the condition AND test negative are True Negatives
  3. Persons who do not have the condition AND test positive are False Positives
  4. Persons who do have the condition AND test negative are False Negatives

If you divide True Positives by the sum of True Positives and False Negatives you get sensitivity: the percentage of persons who truly have the condition who test positive for it.

If you divide the number of True Negatives by the sum of True Negatives and False Positives you get specificity: the percentage of persons who truly do not have the condition who test negative for it.
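In code, the two definitions reduce to a pair of ratios. Here is a minimal sketch, using made-up counts purely for illustration:

```python
# Sensitivity and specificity as simple ratios, using made-up counts.
true_pos, false_neg = 90, 10     # 100 people who truly have the condition
true_neg, false_pos = 950, 50    # 1,000 people who truly do not

sensitivity = true_pos / (true_pos + false_neg)   # 90 / 100 = 0.90
specificity = true_neg / (true_neg + false_pos)   # 950 / 1,000 = 0.95

print(f"sensitivity = {sensitivity:.0%}")   # sensitivity = 90%
print(f"specificity = {specificity:.0%}")   # specificity = 95%
```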

It is nearly impossible to have a test be both 100% sensitive AND 100% specific because of the likely gray area between an extremely tight case definition (e.g., you must meet all 10 criteria)—which gives you higher specificity—and a relatively looser definition (e.g., you only need to meet five out of 10 criteria)—which gives you higher sensitivity. For a host of reasons I will not review here, mostly related to accuracy of categorization, epidemiologists generally prefer to have the specificity of a test be as close to 100% as possible, even at the risk of lower (by which I mean, say, 90% instead of 95%) sensitivity.

Think of it this way, though: the lower the specificity of the test, the more False Positives you have. And the more False Positives you have, the more people you have being treated for the condition at the expense of other people who actually need to be treated. Moreover, given that most conditions being tested for are fairly rare, there will always be many fewer False Negatives than False Positives; one exception, though, would be if you only test persons you are already very certain have the condition, which brings the number of False Negatives much closer to the number of False Positives.
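A quick back-of-the-envelope calculation makes the point. The sketch below assumes a hypothetical test with 90% sensitivity and 95% specificity and applies it at two prevalences; all figures are invented for the demonstration:

```python
# Expected false positives and false negatives for a hypothetical test
# with 90% sensitivity and 95% specificity, at two prevalences.
def fp_and_fn(population, prevalence, sensitivity=0.90, specificity=0.95):
    """Return (false positives, false negatives) among those tested."""
    with_condition = population * prevalence
    without_condition = population - with_condition
    false_pos = without_condition * (1 - specificity)
    false_neg = with_condition * (1 - sensitivity)
    return round(false_pos), round(false_neg)

# Screening everyone when the condition is rare (1% prevalence):
print(fp_and_fn(100_000, 0.01))   # (4950, 100) — far more FP than FN
# Testing only people already very likely to have it (50% prevalence):
print(fp_and_fn(100_000, 0.50))   # (2500, 5000) — FN now rivals FP
```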

And with that—and a review of some of our older daughter’s algebra problems—school was out for the day.

**********

Ohio was supposed to hold its 2020 Democratic presidential primary on March 17. It was postponed until June 2, however, due to concerns over spreading the novel coronavirus. Five other states have done the same thing, meanwhile, leading to speculation President Donald J. Trump may attempt to postpone—or outright cancel—the November 2020 federal elections (Congress, vice president, president).

Whether such an action is even feasible is doubtful. While Congress has broad authority under Article I, Section 4 over the timing of elections to the House and Senate, those elections are actually administered by each individual state. The same is true for elections for vice president and president—and that is before considering that the Electoral College essentially mandates 51 distinct elections, one within each state and the District of Columbia.

But let us assume, as a kind of thought experiment, it actually would be possible to delay these elections. So long as the presidential and vice-presidential elections were held long enough before December 14, 2020—the first Monday after the second Wednesday in December, when electors are required to meet in their respective states to cast their presidential ballots—there would be more than enough time to swear in a president the following January 20.
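The statutory rule fixing when the electors meet can be computed directly. Here is a quick sketch; the citation to 3 U.S.C. § 7 reflects the law as it stood in 2020:

```python
# The "first Monday after the second Wednesday in December" rule
# (3 U.S.C. § 7 as it stood in 2020), computed directly.
from datetime import date, timedelta

def elector_meeting_day(year):
    """Return the date electors meet to cast their ballots."""
    d = date(year, 12, 1)
    while d.weekday() != 2:      # 2 = Wednesday
        d += timedelta(days=1)
    second_wednesday = d + timedelta(days=7)
    d = second_wednesday + timedelta(days=1)
    while d.weekday() != 0:      # 0 = Monday
        d += timedelta(days=1)
    return d

print(elector_meeting_day(2020))   # 2020-12-14, a Monday
```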

However, if these elections simply never occur…well, this is where two sections of the Constitution and the Presidential Succession Act of 1947 (PSA) come into play.

  • Under Amendment XX, Section 1: “The terms of the President and the Vice President shall end at noon on the 20th day of January,
  • “…and the terms of Senators and Representatives at noon on the 3d day of January.”
  • Under the PSA, the line of succession to the presidency is the vice president, followed by the Speaker of the House, the President Pro Tempore of the Senate—traditionally the longest-serving Senator of the majority party—and the members of the Cabinet, beginning with the Secretary of State.

In other words, barring a non-starter Constitutional amendment, an Act of Congress (hard to see Democrats going along with this) or a very-unlikely ruling by the Supreme Court, the Constitution explicitly states that as of 12:01 pm on January 20, 2021, Trump and Michael R. Pence would no longer be the president and vice president of the United States, respectively.

And for the previous 17 days, there would also be no Speaker of the House because the term of every one of the 435 members of the House would have ended at noon on January 3, 2021.

I note at this point that Amendment XX, Section 1 ends with “the terms of their successors shall then begin,” so it is just barely possible an argument could be made that the terms of the president, vice president, House members and Senators would not end because there are no successors. On the other reading, the terms end regardless: without successors, there are no occupants of those offices, effectively shutting down the federal government.

Here is the counter-argument, however, and where things get really interesting.

There would still be a United States Senate, albeit one 35% smaller, at 12:01 pm on January 3, 2021, meaning there would still be a President Pro Tempore to assume the office of the presidency, and who would then nominate someone to be vice president pending Senate approval.
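The logic of this thought experiment amounts to walking the PSA's ordered list and skipping vacant offices. A toy sketch; the vacancies below encode this scenario's assumptions, not any legal conclusion:

```python
# A toy walk through the scenario's succession logic: take the PSA's
# ordered list and return the first office that is not vacant. The
# vacancies encode this post's hypothetical, not any legal conclusion.
succession = [
    ("Vice President", None),            # term ended at noon, January 20
    ("Speaker of the House", None),      # no House after January 3
    ("President Pro Tempore of the Senate", "elected by the surviving Senate"),
    ("Secretary of State", "holdover Cabinet"),
]

office, holder = next(
    (office, holder) for office, holder in succession if holder is not None
)
print(office)   # President Pro Tempore of the Senate
```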

There would still be a Senate because only 35 of the 100 Senators are reaching the end of their terms this year.[1] Fully 65 Senators will still be serving at that time: 35 Democrats (including two Independents, Bernie Sanders of Vermont and Angus King of Maine, who caucus with the Democrats) and 30 Republicans.

That is right: rather than the current Senate, which has a 53-47 Republican majority, this “abridged” Senate would have a 35-30 Democratic majority. And the longest-serving Democratic Senator—who is not up for reelection in 2020—is Patrick J. Leahy of Vermont, who was first elected in 1974!

So…Leahy would absolutely become the 46th president of the United States, sworn in somewhere by Chief Justice John G. Roberts?

Well…not so fast.

And that is because of what I had forgotten on Wednesday: under Amendment XVII, state legislatures may empower their governors to appoint a replacement for a Senator who leaves office before the end of her/his term—and nearly every state has done so, with the appointee just about always a member of the same party as the governor.

In this scenario, these governors immediately appoint replacement Senators as soon as those 35 Senate terms expire at noon on January 3, 2021…and they are sworn in immediately. Traditionally, the vice president swears in each new Senator, so that may be the fly in the ointment here. Presumably, though, in this unusual circumstance Chief Justice Roberts could swear in all the appointed Senators at one time, somewhere in Washington, DC.

As for the governors themselves:

  • In the 12 states where a Democratic Senate term is ending there are
    • 8 Democratic governors
    • 2 Republican governors
    • 1 Democratic governor up for reelection in Delaware
    • 1 Republican governor up for reelection in New Hampshire
  • In the 22 states where a Republican Senate term is ending (with two in Georgia) there are
    • 15 Republican governors filling 16 seats
    • 6 Democratic governors
    • 1 Democratic governor not seeking reelection in Montana

Excluding the three states where a gubernatorial election is being held (or not…as our younger daughter pointed out, why would there be elections for governor if all the federal elections were postponed?), the new Senate would now include:

  • 35 + 8 + 6 = 49 Democrats
  • 30 + 2 + 16 = 48 Republicans

This is still a bare 49-48 Democratic majority, making Leahy the 46th president.

IF gubernatorial elections are held in Delaware and New Hampshire this November, though, it is very likely the incumbent wins both races, which adds one new Democratic and one new Republican Senator, for a bare 50-49 Democratic majority…and President Leahy.

That leaves it all up to Montana.

IF there is a Montana gubernatorial election this November, the Republican nominee would likely be favored to win. In that case, we would wind up with a 50-50 tie in the Senate. And with no vice president to break the tie, it is not clear whether the President Pro Tempore—and thus the presidency—would go to Leahy or to Republican Charles E. Grassley of Iowa, who was first elected in 1980. Of course, if a Democrat were elected the next governor of Montana, that would result in a 51-49 Democratic Senate majority…and President Leahy.
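The seat arithmetic of these scenarios can be tallied in a few lines; the counts are the post's own:

```python
# The post's Senate seat arithmetic: holdover Senators plus appointments
# by governors, then each of the contemplated election outcomes.
dem = 35 + 8 + 6     # holdovers + appointments by Democratic governors
rep = 30 + 2 + 16    # holdovers + appointments by Republican governors
print(dem, rep)                # 49 48 — no elections in DE, NH or MT

dem, rep = dem + 1, rep + 1    # DE and NH incumbents win, each appoints one
print(dem, rep)                # 50 49

print(dem, rep + 1)            # 50 50 — a Republican wins Montana
print(dem + 1, rep)            # 51 49 — a Democrat wins Montana
```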

Perhaps the nod still goes to Leahy in the case of a 50-50 Senate split, as the longest-serving Senator overall. Perhaps there is something like a coin flip. Or maybe these two men—who have served together in the United States Senate for 40 years and are around 80 years of age—decide to serve jointly, with one as president and one as vice president.

The bottom line, though, is that it is far more likely than not that if there are no federal elections this November, Democratic Senator Patrick Joseph Leahy of Vermont would be sworn in at 12:01 pm EST on January 20, 2021 as the 46th president of the United States.

Until next time…please be safe and sensible out there…

[1] Including Republican Kelly Loeffler, appointed to replace retiring Republican Johnny Isakson in December 2019.

December 2019 update: Democratic presidential nomination and general election polling

With the sixth Democratic presidential nomination debate set for December 19, 2019 in Los Angeles, California, here is an updated assessment of the relative position of the now-15 declared candidates. Since the previous update, four candidates exited the race: Miramar, Florida Mayor Wayne Messam on November 20, former United States House of Representatives Member (“Representative”) Joe Sestak of Pennsylvania on December 1, Montana Governor Steve Bullock on December 2, and, rather surprisingly given that she had qualified for the sixth debate, United States Senator (“Senator”) from California Kamala Harris on December 3. The 13 candidates who have abandoned their quest to be the 2020 Democratic presidential nominee each exited with grace, class and dignity; I commend them for it.

To learn how I calculate the value I assign to each candidate, NSW-WAPA (national-and-state-weighted weighted-adjusted polling average), please see here;[1] for recent modifications, please see here.

And, of course, here is the December 2019 lighthouse photograph in my Down East 2019 Maine Lighthouses wall calendar.

Dec 2019 lighthouse.JPG

**********

Table 1 below aggregates data from all national and state-level polls publicly released since January 1, 2019 (as of 1:00 am EST on December 19, 2019), including:

  • 284 national polls (including 50 weekly Morning Consult tracking polls and 30 weekly YouGov tracking polls)
  • 38 Iowa caucuses polls
  • 39 New Hampshire primary polls
  • 12 Nevada caucuses polls
  • 33 South Carolina primary polls
  • 73 Super Tuesday polls[2]
  • 78 polls from 20 other states.[3]

There are now 558 total polls, up from 488 last month.

Table 1: National-and-state-weighted WAPA for declared 2020 Democratic presidential nomination candidates

Candidate National IA NH NV SC Post-SC NSW-WAPA
Biden 28.1 19.9 20.8 27.3 36.3 27.2 25.7
Warren 16.0 18.3 17.6 18.8 12.6 18.2 17.0
Sanders 16.5 15.5 17.5 19.0 11.7 16.2 16.0
Buttigieg 6.2 15.1 11.1 6.1 4.5 6.4 9.1
Steyer 0.5 2.1 1.5 3.1 3.1 0.3 2.1
Yang 2.0 2.0 2.6 2.7 1.4 1.4 2.1
Klobuchar 1.5 4.0 2.0 1.3 0.9 1.3 2.1
Booker 2.1 2.1 1.7 1.5 2.8 1.5 2.0
Gabbard 1.0 1.4 2.9 1.1 0.9 0.9 1.6
Castro 0.9 0.5 0.2 1.0 0.2 1.1 0.55
Williamson 0.3 0.1 0.4 0.3 0.5 0.2 0.30
Delaney 0.3 0.5 0.4 0.00 0.3 0.2 0.28
Bennet 0.3 0.3 0.2 0.4 0.2 0.3 0.28
Bloomberg 0.7 0.2 0.2 n/a 0.3 0.3 0.21
Patrick 0.05 0.00 0.05 n/a 0.1 0.1 0.04
DK/Other 23.6 18.0 20.8 17.4 24.2 24.4 20.6

The race continues to follow the same pattern. Former Vice President Joe Biden remains the nominal frontrunner (25.7, down from 26.2), primarily because of his 23.7-percentage-point (“point”) lead in South Carolina, essentially unchanged from 24.0 last month. However, he is less strong in Iowa and New Hampshire, where the two candidates battling for second place—Massachusetts Senator Elizabeth Warren (17.0, down from 17.3) and Vermont Senator Bernie Sanders (16.0, up from 15.8)—are much closer to first place. And this more-inclusive version of NSW-WAPA overstates the gap between Biden and Warren; only examining polls conducted entirely after June 26, 2019, when the first round of Democratic presidential debates ended, Biden drops to 24.6 and Warren rises to 18.2; Sanders is at 15.8. Rounding out the Big Four, overall and in the four earliest states, is South Bend, Indiana Mayor Pete Buttigieg (9.1—up from 8.1, and 7.1 the month before). These four candidates account for more than two-thirds (68.0%) of declared Democratic voter preferences.

Looking only at Iowa and New Hampshire, meanwhile, shows an even tighter race, especially when only post-first-debate polls are considered. Using these more recent polls, Iowa is effectively a four-way tie, with Warren at 19.7, Biden at 18.9, Buttigieg at 16.0 and Sanders at 15.0. There is a similar scrum in New Hampshire, with Biden at 19.5, Warren at 19.4, Sanders at 17.0 and Buttigieg at 11.4.

In the next tier are five candidates with NSW-WAPA between 1.6 and 2.1 who are running out of chances to rise into the top four: billionaire activist Tom Steyer, entrepreneur Andrew Yang, Minnesota Senator Amy Klobuchar and New Jersey Senator Cory Booker—essentially tied for 5th place—followed by Hawaii Representative Tulsi Gabbard. Other than Booker, these candidates rose in the last month, particularly in the early contests. However, only Steyer, Yang and Klobuchar will be joining the Big Four on Thursday’s debate stage, despite protests by Booker and former Secretary of Housing and Urban Development Julián Castro, who remains mired around 0.6. The top nine candidates total more than three-quarters (77.7%) of declared Democratic voter preferences.

While Castro and the remaining six candidates divide just 1.7 between them, nobody else seems close to ending their campaign soon. Indeed, Castro missed the December 9 deadline to run in 2020 against Texas Senator John Cornyn. Meanwhile, the upsurge in NSW-WAPA for “Don’t Know/Other” reflects lingering support for Harris.

Returning to the debates, seven pollsters—six nationally[4] and one in South Carolina[5]—conducted polls of the 2020 Democratic presidential nomination both before (but after the October 2019 debate) and after the November 2019 debate. Simple average differences in polling percentage (South Carolina poll results weighted four times national results) show measurable gains for Buttigieg and Steyer (+1.7 points each), Gabbard (+0.9) and Booker (+0.6), as well as measurable declines for Biden (-2.9) and, especially, Warren (-4.1). Warren’s decline was even sharper after weighting averages by pollster quality (-5.9) and number of days between polls (-7.9): her decline was steepest in higher-quality polls conducted farther apart in time. This decline is reflected in the decline in Warren’s NSW-WAPA from 19.1 last month to 18.2 now using only polls conducted after the first Democratic candidate debate. By contrast, “Don’t Know/Other,” Castro, Sanders, Booker and Klobuchar saw higher increases in support, albeit still small, with higher-quality polls conducted farther apart in time.
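The weighting scheme behind that comparison can be sketched simply: average each pollster's change in a candidate's support, counting the South Carolina pollster four times relative to a national pollster. All change values below are placeholders, not the actual poll results:

```python
# Sketch of the before/after debate comparison: average each pollster's
# change in a candidate's support, counting the South Carolina pollster
# four times relative to a national pollster. All changes are placeholder
# values, not the actual poll results.
national_changes = [-3.0, -5.5, -4.0, -2.5, -6.0, -3.5]  # six national pollsters
sc_change = -5.0                                         # one SC pollster

changes = national_changes + [sc_change]
weights = [1] * len(national_changes) + [4]
avg_change = sum(w * c for w, c in zip(weights, changes)) / sum(weights)
print(round(avg_change, 2))   # -4.45
```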

**********

On December 13, 2019, FiveThirtyEight unveiled its own 2020 Democratic presidential nomination polling aggregations. We use different aggregation methods, so while values we estimate for each candidate are broadly similar, there are clear differences, most notably in the relative standings of Buttigieg and Warren in Iowa and New Hampshire. I encourage you to compare our methods and results, especially since I base much of my own methods on their example.

Here are four key areas where our aggregation methods differ.

First and foremost, I combine state and national polling averages into a single overall value: NSW-WAPA; FiveThirtyEight does not. Similarly, when a state has not been polled for a while, FiveThirtyEight adjusts each candidate’s standing in that state by the change in their national standing, or “secular trend.” I do not do this. The first four states are polled often enough to make such adjustment unnecessary there, and I use time-and-quality weighted averages of a) the 10 Super Tuesday states and b) all subsequent states, mitigating the need for secular trend adjustment. Moreover, while I think such adjustment is necessary for general election polling, particularly the closer the election is, I am skeptical state-level primary polling moves in tandem with national polling, if only because there is no “national presidential nomination primary” to assess.
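To make the combination step concrete, here is a sketch of the final NSW-WAPA calculation, using the weights from footnote 1 and the area averages from Biden's row of Table 1; the upstream time-and-quality weighting within each area is not reproduced here:

```python
# The final NSW-WAPA step: a weighted average of the area-specific
# averages, using the weights from footnote 1. The inputs are Biden's
# row from Table 1.
weights = {"IA": 5, "NH": 5, "NV": 4, "SC": 4, "post_SC": 2, "national": 1}
biden   = {"IA": 19.9, "NH": 20.8, "NV": 27.3, "SC": 36.3,
           "post_SC": 27.2, "national": 28.1}

nsw_wapa = (sum(weights[area] * biden[area] for area in weights)
            / sum(weights.values()))
print(round(nsw_wapa, 1))   # 25.7, matching Table 1
```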

That secular trend is apparent in how much worse Warren is faring in Iowa and New Hampshire in FiveThirtyEight’s models compared to mine, likely due to her polling decline since the last debate, as well as the corresponding stronger position of Buttigieg in those states.

Second, I weight elapsed time since a poll was conducted differently than FiveThirtyEight does, meaning NSW-WAPA is slower to detect sustained movement in a candidate’s standing. Still, it caught the major movements: the decline in support for Biden and former Texas Representative Beto O’Rourke since the start of the debates, the rise of Harris after the first debate and her subsequent decline, and the slow steady ascent of first Warren (with a recent decline) then Buttigieg.

Third, FiveThirtyEight adjusts for the tendency of some pollsters to release more favorable results for some candidates, on average, relative to others. I do not, simply because I had not performed or seen the calculations. Also, adjusting for this mathematical “bias” in my general election polling aggregates rarely alters final aggregates very much. However, if I can obtain the bias estimates FiveThirtyEight uses, I may consider using them.

Finally, FiveThirtyEight assigns a higher weight to polls with a larger sample size; I do not weight by sample size at all. For one thing, this would give inordinate influence to Morning Consult tracking polls, whose average sample size in 2019 has been 14,562—while their pollster rating is B/C (which I code 2.5/4.3 = .581). There is in fact a slight inverse relationship between average sample size and pollster quality: r=-0.20 for 24 polling agencies who have conducted two or more national Democratic nomination contest polls in 2019.[6] Moreover—and here my epidemiological training comes into play—increasing sample size makes an estimate more precise (i.e., smaller margin of error) but does not reduce any systematic bias; the latter is reduced through the sort of adjustment discussed in the previous paragraph. Simply put, I see no valid methodological reason for giving more weight to polls with higher sample sizes.
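A quick simulation illustrates the precision-versus-bias distinction; the true support level and the house bias below are invented for the demonstration:

```python
# Larger samples shrink a poll's margin of error but leave systematic
# bias untouched. True support and the house bias are invented numbers.
import random
import statistics

random.seed(0)
TRUE_SUPPORT = 0.25
HOUSE_BIAS = 0.03          # this hypothetical pollster runs 3 points high

def poll(n):
    """Simulate one poll of n respondents from the biased pollster."""
    p = TRUE_SUPPORT + HOUSE_BIAS
    return sum(random.random() < p for _ in range(n)) / n

results = {}
for n in (500, 15_000):
    estimates = [poll(n) for _ in range(200)]
    results[n] = (statistics.mean(estimates), statistics.stdev(estimates))
    print(n, round(results[n][0], 3), round(results[n][1], 3))
# The spread shrinks dramatically with n, but both means sit near 0.28,
# not 0.25: more respondents cannot undo a biased sampling frame.
```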

**********

There has been a slight pro-Trump trend in recent polls of hypothetical matchups between potential 2020 Democratic presidential nominees and President Trump I may address in a later post. For now, though, here are the averages: Biden would beat Trump nationally by 7.6 points, Warren by 3.1 points and Sanders by 4.9 points, while Buttigieg would lose by 0.3 points and Booker by 0.7 points[7]; Bloomberg, based on eight polls—seven released in the last two months—would win by 1.2 points. The other eight candidates for whom I have matchup data would lose by between 2.9 (Klobuchar) and 9.3 (spiritual advisor Marianne Williamson) points, although these numbers are misleading, as they are primarily based upon data from pollster HarrisX, which tends not to push undecided voters to choose, making for unusual polling margins.

Weighted by a rough estimate of the likelihood of winning the nomination (NSW-WAPA/.794), the 2020 Democratic nominee would beat Trump by 3.4 points, broadly in line with the median Democratic presidential margin (+3.0) in the previous six presidential elections, which include three elections with an incumbent seeking reelection and three elections with no incumbent. Excluding Biden and Sanders, however, decreases the margin to -0.2 points, with the caveat from the preceding paragraph.

Comparing available state-level results,[8] which decide presidential elections via the Electoral College, to my partisan-lean measure 3W-RDM implies Democrats would win the national popular vote by between 3.7 (excluding Biden and Sanders) and 5.8 points. Most encouraging to Democrats should be polls from North Carolina (R+6.0), Georgia (R+9.6), Arizona (R+9.7) and Texas (R+15.3), which show Democrats either even (Georgia) or within five points of Trump; on average, they imply a national Democratic lead of 8-9 points, confirming strong opportunities for Democrats in the southeast and southwest.
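The state-to-national inference works by subtracting a state's partisan lean from its polled margin, with Democratic leads positive. In the sketch below, the leans are the 3W-RDM values quoted in the text; the poll margins are illustrative placeholders, not actual polling aggregates:

```python
# Implied national margin = state poll margin minus the state's partisan
# lean, with Democratic leads positive. The leans are the 3W-RDM values
# quoted in the text (e.g., Georgia R+9.6 becomes -9.6); the poll margins
# are illustrative placeholders, not actual polling aggregates.
leans = {"GA": -9.6, "AZ": -9.7, "TX": -15.3, "NC": -6.0}
polls = {"GA": 0.0, "AZ": -2.0, "TX": -5.0, "NC": -1.0}   # hypothetical

implied = {state: polls[state] - leans[state] for state in leans}
print(implied)   # e.g., an even poll in Georgia implies a national D+9.6

average = sum(implied.values()) / len(implied)
print(round(average, 1))   # lands in the 8-9 point range described above
```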

By contrast, polls from Democratic-leaning Nevada (D+2.0) show Democrats anywhere from 0.7 points ahead to 2.1 points behind, implying Democrats would lose nationwide by 1.3-4.1 points. And while Democrats are 2.1-5.5 points ahead in the swing state of Michigan, which Trump won by 0.16 points in 2016, their position is…wobbly…in Florida (R+3.4), Pennsylvania (R+0.4) and Wisconsin (D+0.7), all of which Trump won narrowly in 2016.

On balance, however, Democrats remain extremely competitive in a wide range of swing and Republican-leaning states against an incumbent president, with the most likely nominees—Biden, Warren, Sanders, Buttigieg—beating him by a combined 4.8 points.

Until next time…

[1] Essentially, polls are weighted within nation/state by days to nominating contest and pollster quality to form an area-specific average, then a weighted average is taken across Iowa (weight=5), New Hampshire (5), Nevada (4), South Carolina (4), the time-weighted average of subsequent contests (2) and nationwide (1). Within subsequent contests, I weight the 10 March 3, 2020 “Super Tuesday” states (Alabama, California, Colorado, Massachusetts, Minnesota, North Carolina, Oklahoma, Tennessee, Texas, Virginia) twice as heavily as the remaining contests. As of this writing, I have at least one poll from (in chronological order) Maine, Michigan, Mississippi, Missouri, Ohio, Washington, Arizona, Florida, Illinois, Georgia, Wyoming, Wisconsin, Delaware, Maryland, New York, Pennsylvania, Indiana, Oregon, Montana and New Jersey.

[2] Primarily California (31), Texas (19) and North Carolina (8)

[3] Primarily Wisconsin (13), Florida (12), Pennsylvania (9) and Michigan (8)—not coincidentally, the four states President Donald J. Trump won in 2016 by the narrowest margins.

[4] Morning Consult Tracking, Monmouth University, Fox News, YouGov, IBD/TIPP, CNN/SSRS, Quinnipiac University

[5] YouGov

[6] The correlation increases to -0.43 if you exclude Morning Consult.

[7] Although this matchup has not been assessed since late September.

[8] From 27 states: Pennsylvania, New Hampshire, Wisconsin, Michigan, North Carolina, Texas, Iowa, Arizona, South Carolina, Minnesota, Nevada, Massachusetts, Florida, New York, Kentucky, Maine, Ohio, North Dakota, California, Alaska, Washington, Colorado, Missouri, Utah, Virginia, Montana, Connecticut, Georgia.

Interrogating memory: The Beatles, wax museums and a diner mystery solved

To the extent my writing over the last three years has a theme (or perhaps even a brand), it is what I call interrogating memory.

At one level, this is just a fancy term for “fact-checking,” as in looking through my elementary school report cards (I am missing the one for third grade[1]) to confirm my fourth-grade teacher was named Ms. Goldman, only to discover she was my fifth-grade teacher and her name was “R. Goldberg.”

Quick story.

On the first day of fifth grade at Lynnewood Elementary School, my new teacher called me up to her desk. Ms. Goldberg, an attractive woman with an unwavering platinum blonde permanent, was curious about my father, whose name she had seen was David Louis Berger. We quickly established (most likely through his age and being raised in West Philadelphia) they had been in the same confirmation class at Congregation Beth El in 1951. It was also clear from the way she spoke about him (my aunt once wrote me, “He really was lovable you know”) she had a serious crush on him. I do not recall how I reacted, or what my father said when I told him.

Still, knowing it was fifth, not fourth, grade and that her surname was Goldberg, not Goldman, does not materially alter the story: my teacher had known and liked my father when they were teenagers.

The thing is, however, I pulled out those report cards in the process of reassessing an entirely different memory, one that better exemplifies the complexity of interrogating memory.

As a child and young teen, I hated The Beatles (or, at least, refused to succumb to the pressure to love them). And until a few weeks ago, I believed this disdain stemmed from my active resistance to being told what to like and what not to like. My attitude from a very young age was that I will decide for myself what I like and do not like, thank you very much.

My proof, other than my own memory?

I was certain that mixed in with otherwise glowing comments from my elementary school teachers on my report cards was a common phrase along the lines of “does not like to follow directions.”

But when I pulled out my five surviving report cards from Lynnewood, this sentiment was far less ubiquitous than I had remembered. Mrs. Virginia Hoeveler did begin her extensive (and humbly flattering) comments, dated June 13, 1973, by noting I initially had “difficulty conforming to a classroom situation,” though I quickly adjusted. She also added a postscript: “Matt is quite the ‘individual’ – he likes to do his ‘own thing.’”

Five months later (November 7, 1973), Ms. C. Edwards—who broke the heart of every boy in my second-grade class when she became Mrs. C. Stevenson at the end of the school year (many of us attended the wedding, sitting in a mezzanine area of the church, overlooking the ceremony, stage left)—wrote, “Matt sometimes gets carried away with his intelligence. He seems to feel that he doesn’t need to follow directions.”

Ouch.

Still, as of June 1, 1974, I had “become much more social with [my] peers.” Good to know I was ceasing to be a curmudgeon at seven years old.

But…that is it. I have no third grade report card, neither Miss Nichols nor R. Goldberg wrote more than a token sentence or two, and Mr. Bianco (a good-looking man who wore platform shoes and was smitten with my mother) merely noted I would have had an “O” (Outstanding) instead of an “S” (Satisfactory) in Social Studies but for too many missed assignments.

Oh.

The point is, my memory was not, strictly speaking, incorrect; there were comments along the lines of “does not like to follow directions.” It was just that they were confined to first and second grades, when I was apparently still adjusting socially and academically to a formal classroom environment.

Here is the kicker, though. Even before I pulled out those report cards, I had already concluded my aversion to structured guidance was not why I had hated The Beatles (which I no longer do; quite the contrary, in fact[2]). Or, at least, it was not the only reason.

Just bear with me while I wax rhapsodic about Atlantic City, New Jersey.

I spent the summers of 1974 and 1975 living with my mother and our dog—a Keeshond named Luvey—in Penthouse A (really, just one of two slightly larger rooms with two queen beds and a walk-in closet sharing a small semi-circular concrete balcony overlooking the pool) of the Strand Motel in Atlantic City. On weekends, my father would drive the roughly 80 miles from our home in Havertown, Pennsylvania (just west of Philadelphia) to join us.

Luvey in Atlantic City August 1974 2.jpg

The Strand Motel, which sat between the Boardwalk and Pacific Avenue, and between Providence and Boston Avenues, was knocked down around 1979 as part of the construction of the Golden Nugget Casino (which, after many name changes, closed in 2014). I am reasonably certain this photograph was taken in the lounge directly below the penthouses one of those two summers; my father is the silver-haired man in the blue jacket sitting at the bar, while the left side of my mother’s face is just visible on the right (her natural red hair was back).

Scan0011.jpg

Those two summers, I spent my days wandering up and down Pacific Avenue (either on foot, or riding a jitney for 35 cents) and the Boardwalk. By myself, at the ages of seven and eight, that is; I cannot imagine that happening today. I especially loved going into the lobby of every motel and hotel along the roughly three miles of roads/Boardwalk in my purview to collect one of each pamphlet available in the large wooden racks there. During the winter, I would dump them onto my parents’ bed and rummage through them, wishing I was back in Atlantic City.

One of those pamphlets was actually a red-covered brochure for Louis Tussaud’s Wax Museum, then located at 1238 Boardwalk (yes, the Boardwalk is considered a road for mailing purposes), roughly halfway between North Carolina and South Carolina Avenues.

I do not know why I suddenly recalled this wax museum (opened by Madame Tussaud’s somewhat less-talented great-grandson) a few weeks ago. Perhaps it was researching my book, and thinking about how we stopped summering down the shore (as those of us raised near Philadelphia say) in 1976, just before the casinos started being built, effectively ending “my” Atlantic City. Along those lines, I have reflected a great deal this summer on how much my wife Nell and our daughters love spending much of the summer on Martha’s Vineyard, and how much, frankly, I do not. And I have concluded that no longer spending summers in Atlantic City, even as it was inexorably changing (for the worse, in my opinion)[3], was a deeply painful loss I have yet fully to process. But the result is a silly jealousy of Nell’s childhood (and current) summer home.

Or, Louis Tussaud’s Wax Museum came to mind for no other reason than the 1953 Vincent Price vehicle House of Wax was recently on TCM OnDemand (I did not get a chance to watch it).

Regardless, what I specifically recalled about that slightly tacky museum was that one of the first tableaus you saw when you entered from the Boardwalk was of The Beatles circa 1964. Walking by the four wax figures, I would hear “I Want to Hold Your Hand” playing; perhaps songs like “She Loves You” played as well. In fact, now that I interrogate that memory, the point of the tableau may have been to reproduce their historic February 9, 1964 appearance on The Ed Sullivan Show.

I could not tell you what other tableaus I saw in Louis Tussaud’s because, frankly, the only other thing I clearly remember is the Chamber of Horrors.

Again, I was seven or eight years old when I viewed those displays, some of which were particularly gory and graphic. This nostalgic video includes two of them: a low-quality rendition of the Lon Chaney version of the Phantom of the Opera and a gruesome Algerian Hook (speaks for itself, despite being misspelled in the video).

As an aside, the photograph in the video of the Boardwalk in front of Steel Pier in the summer of 1974 was like stepping out of a TARDIS: that is the Atlantic City I remember. To be fair, I preferred Million Dollar Pier, whose Tilt-a-Whirl I would foolishly ride every weekday, around 12:30 in the afternoon, after eating a slice of pizza from a little stand just where Arkansas Avenue meets the Boardwalk. Seeing that photograph was both exhilarating and painful; I may have known Atlantic City at the very end of its family-resort glory, but I loved being there.

Returning to the Chamber of Horrors, I was both terrified and fascinated by the scenes it depicted. If memory serves, they also included Lee Harvey Oswald being shot by Jack Ruby on November 24, 1963. As deeply unsettling as they were, I could not stop poring over the photographs of those displays in my souvenir booklet back home in Havertown.

But rather than admit they scared the bleepity-frick out of me, I displaced that emotion onto the completely banal and non-threatening (if mildly creepy, in the way all wax figures are mildly creepy) wax renditions of John, Paul, George and Ringo, simply because they were what I saw before I entered the Chamber of Horrors, which truly did scare me. This may not be quite what Sigmund Freud meant by a “screen memory,” but the concept is broadly the same.

In some ways, “interrogating memory” is like the love child of psychoanalytic technique (patiently probing memories to get at any underlying meaning) and the epistemological underpinnings of epidemiology (questioning and verifying everything, putting all data points into context—usually chronological), raised on a steady diet of persistence and a genuine love of history.

Or, to put it even more simply, it is using every technique in your critical toolbox to answer the question, “Hold on a minute, did that really happen that way, then, in that place?”

**********

Speaking of persistence, I may have solved a mystery I first identified here:

Memory 2: One Saturday night in 2002, 2003 or 2004, I took a meandering night drive. Somewhere in Montgomery County, north of Philadelphia, I found myself driving on a “road with a route number.” I then turned left onto a different “road with a route number” to explore further; I may have intended to find this latter road from the start. Sometime later, I find a 24-hour diner (on weekends, at least); I park and enter. I am almost certain I walked up a few concrete steps to do so. It was clean and kind of “retro-modern;” despite my sense of a great deal of black and white in the décor, I also feel like there was a fair amount of neon and chrome. I sat at a small-ish counter (curved?) in a separate room to the right as you entered (there were some booths behind me); in front of me may have been glass shelving stacked high to the ceiling. Behind me and to the left was a large glass window through which I can look down onto an asphalt-covered parking area with at most a few spaces. The diner itself is sort of tucked into a dark urban commercial corner, almost as though it jutted out from an adjoining building. I do not recall what I ordered or what I was reading, or whether I even liked the diner or not. I never returned there, and I can no longer recall the name of the diner or its precise location.

In the post, I concluded I had almost certainly turned north on Route 152 from Business Route 202 that night, eventually wending through the Montgomery County towns of Chalfont, Briarwyck, Silverdale, Perkasie, Sellersville and Telford (where Route 152 ends at Route 309). It was just that none of these towns had the sort of urban-feeling center in which my memory placed the diner.

Frustrated in my efforts to find a diner that fit the necessary criteria, I concluded thus:

I have a sinking suspicion this particular eatery has since closed; this was 15 or so years ago, after all. Or else I have simply mixed up an intersection from one drive with a diner I happened upon in another—though I highly doubt it. What remains mystifying is how this late-night restaurant could have made such an impression on me—yet I have no idea where it is/was or what its name is/was.

As I said, though, a key element of interrogating memory is persistence, so the other night I resolved to trace my possible route that night, starting at the intersection of Routes 152 and Business 202, using StreetView on Google Maps.

Patiently clicking the forward arrow, waiting less patiently for the photographs to resolve on my computer screen, I made my virtual way through Chalfont and Briarwyck and Silverdale and Perkasie into Sellersville. I took a few wrong turns along the way (Route 152, like many state routes, has a habit of randomly turning left or right onto a different street), but always righted myself.

After getting lost multiple times at a particularly tricky five-way intersection, I continued along South Main Street, heading away from the center of Sellersville. In that confusing way of state routes, by following “North” Route 152, I actually travelled south. After passing a few scattered two-story brick houses and local businesses, a large (for the area) parking lot appeared on my left.

In the middle of the lot was a light gray single-story building with a double-sloped roof. The front of the building was a two-story structure from which short flights of concrete steps, under red awnings, protruded. Above each awning was a lighted sign, white with red letters, reading “A & N DINER.” A yellow road sign embedded in the asphalt just beyond the sidewalk read “A & N DINER / FAMILY RESTAURANT / OPEN 24 HOURS,” with “HAPPY LABOR DAY” spelled out in removable black plastic letters just below that.

Say what now? How did I miss this 24-hour diner in my extensive search?

Something about it seemed vaguely familiar, especially adjusting for the fact these September 2018 photographs were taken during the day, while my drive occurred at night, when the A & N Diner would have been brightly lit in the darkness. I clicked on the map’s icon to learn it is no longer open 24 hours. If that change occurred between Labor Day 2018 and early March 2019, that would explain why I could not find it searching for “24 hour restaurants.”

Scrolling through the accompanying photographs, I observed a small counter area to the left as you entered. One photograph showed five dark pink (almost gray) leather-covered stools bolted to the floor. To the left of the counter was a window, which another photograph confirmed overlooked the parking lot. And the wall one faced sitting at the counter might be the one I recalled—the glass shelving could easily have been replaced since I was (possibly) there in 2003 or 2004 (or existed only in my memory).

The only problem was that this was hardly the urban downtown my memory insisted housed the diner. However, I may have an explanation for that.

One of the classes I took in the first semester of my biostatistics Master’s program at Boston University School of Public Health was on probability theory. While I earned an A on the first of three exams (which comprised ~90% of the final grade), I bombed the second exam. Forget getting an A in the class; I was simply hoping to salvage a B with the final exam. Sometime after that disastrous second exam, say in November 2005, I had a powerful dream. In that dream, in which I learned I did in fact earn an A, it was night. The dark second floor room in which I stood extended far behind me as I stared out a large bay window; perhaps I was in bed first, it is all a bit fuzzy 14 years later. Below me was an urban corner with low buildings, lit by a single street lamp; a kind of brick culvert was off to my right.

This dream made such an impression on me, I still remember it relatively clearly nearly 14 years later. It is possible I mixed up looking out the window into the dark parking lot at the A & N Diner with looking out the window at the urban street corner in the dark in my dream. Why, I could not begin to tell you…unless the former somehow got worked into the latter? I would have to drive to the A & N Diner at night to be certain.

Another slight variation is that I recall the diner being on my right, but I would have approached it from the left that night. That could easily be explained, however, if I parked on the opposite side of the building (putting the diner on my right as I entered) and/or if I drove past it at first, decided to stop in for a snack, and turned around, thus placing the building to my right as I drove to it again.

There is one additional small point of confirmation. In my memory, the diner is shiny and new. Well, a little digging on the invaluable Newspapers.com uncovered a February 2000 article in the NEWS-HERALD of Perkasie, PA[4]. The gist of the article is that Nicholas and Vasso Scebes had assumed control of Angelo’s Family Restaurant on January 31, 2000, renaming it A & N Diner and Family Restaurant.

The key passage is this:

“Later this month, the manager said, they hope to be settled in enough to change the environment of the restaurant, starting with the interior wall colors, which are currently a bright two-tone lime green. Vasso said that’s the first thing regulars asked to have changed.”

Later in the article, Vasso avowed her intention to “clean up this place and make it respectable.”

If those renovations were completed sometime in 2000, they could well have seemed “shiny and new” three or four years later, when a young man out for a meandering night drive almost certainly stopped in with his book for a meal and lots of decaffeinated coffee, black.

For the record, dreams sometimes do come true. I studied intensely for the final exam, and earned something like a 92. Great, I thought, that will get me a solid B in the course. When I learned I had actually received an A, I e-mailed the professor to make sure he had not made a mistake. No, he said, he thought well enough of my participation in the class to essentially “throw out” the middle exam as an unfortunate outlier. Oh, I replied, thank you very much.

Until next time…

[1] Itself a curious slip of memory, as I originally wrote (from memory) “fourth grade.” I only pulled out these report cards to review a week or two ago.

[2] I am even listening to Abbey Road as I edit this post.

[3] This shift is beautifully rendered in Louis Malle’s 1980 film Atlantic City.

[4] Baum, Charles W., “New family takes over operation of former Angelo’s in Sellersville,” NEWS-HERALD (Perkasie, PA), February 16, 2000, pg. 3.

Organizing by themes VII: Words beginning with “epi-“

This site benefits/suffers/both from consisting of posts about a wide range of topics, all linked under the amorphous heading “data-driven storytelling.”

In an attempt to impose some coherent structure, I am organizing related posts both chronologically and thematically.

In this post, I sketched the winding road on which a 28-year-old man who had just resigned (without any degree) from a doctoral program in government ended up a 48-year-old with a doctorate in epidemiology.

And in this post, that degree turns out to be the endgame (for now), not the starting point.

In between those two points, that man found a genuine resting place in the field of epidemiology. So much so that when his blog—OK, my blog—debuted in December 2016, I was already contemplating the need to publish an epidemiology “primer” to provide context for the many epidemiology-centered posts I just knew I would be writing.

Ultimately, there was only one such post, based upon an unsettling implication from my doctoral research.

This latter post appeared in April 2017, just three months before I decided to stop looking for an epidemiology-related position (or, at least, one that built upon my 19 years as a health-related data analyst and was commensurate with my salary history and requirements, education and experience[1]) and focus on writing and my film noir research.

In this two-part series (which includes links to my doctoral thesis and PowerPoint presentations for each of its three component studies), I describe my experience at the 2017 American Public Health Association Annual Meeting & Expo. In January 2017, when I still considered myself an epidemiologist, I submitted three oral presentation abstracts (one for each doctoral thesis study). Two were accepted, albeit after I had announced my career shift. Nonetheless, I traveled to Atlanta, GA to deliver the two talks; the conference became a test of whether the “public health analyst” fire still burned in me the way it had.

APHA 2017 1

APHA 2017 2

Spoiler alert: not so much.

**********

Here is the thing, however.

I still love epidemiology in the abstract. As I wrote in my previous post: “In epidemiology, I had found that perfect combination of applied math, logic and critical thinking…”

In fact, I even have a secular “bible”:

modern epidemiology

In essence, epidemiology was both an analytic toolkit and an epistemological framework: critical thinking with some wicked cool math. Moreover, the notion of “interrogating memory” is informed by my desire to “fact-check” EVERYTHING–I am innately a skeptic.

Well–I was not ALWAYS a skeptic.

And much of my writing about contemporary American politics reflects my concern that the United States is facing an epistemological crisis.

Given my ongoing love for epidemiology (even if it is not currently how I make a living) and my desire to promote critical thinking, it is very likely I will revisit my doctoral field in the future on this blog.

Until next time…

[1] I hesitate to say that I was the victim of age discrimination (at the age of 50), since I cannot back up that assertion with evidence. I am on far safer ground noting that the grant-funded positions I occupied for most of the last two decades barely exist anymore.

Two posts diverged…though not in a yellow wood

This post began as the seventh in the “organizing by themes” series, the one that would contain annotated links to my posts related to epidemiology, epistemology, public health and career changes.

THAT post may be found here.

When I started writing, though, I realized that I was telling the full back story of my adult professional and graduate student life. So rather than clunkily shoehorn the “theme organization” post at the end, I acceded to the inevitability of two distinct posts.

This was not the first time I had started writing one post only to find myself writing an entirely different post; it is a welcome process of literary free association.

**********

As I have alluded to elsewhere, I sort of stumbled into my previous career as a health-related data analyst.

On June 30, 1995, I walked away without a degree from a six-year-long pursuit of a doctorate in “government” (read: political science) from Harvard’s Graduate School of Arts and Sciences (GSAS). In June 2015, however, I applied for—and received[1]—the Master’s Degree for which I had already qualified when I resigned; it was not the worst consolation prize ever.

IMG_2337 (3).JPG

With no idea what to do next (other than remain in the Boston area, having just moved into an apartment with my girlfriend of two years) and a set of quantitative and “critical thinking” skills, I spent the summer of 1995 performing data entry at a long-defunct firm called Pegasus Communications. That bought me some time…though I did not use it as wisely as I could have.

The following January, despite my better judgment, I accepted a position as an Assistant Registrar at Brandeis University. To this day, I do not know why I was offered the position: I was a 29-year-old political science major with zero experience in higher education administration who would be supervising three highly-competent professional women a few decades older than me.

In retrospect, I think my relative youth and inexperience equated to “willing to work long hours for a lower salary.”

Still…you get what you pay for: it was a terrible fit from the start, and I was unceremoniously let go late in May. As relieved as I was to be free from that position, that was the most drunk I would be until the day my mother was buried in March 2004[2].

Regrouping, I narrowed my focus to positions which would allow me to utilize the data analytic skills I had acquired at Yale and Harvard (though, in retrospect, I did not know nearly as much as I thought I did).

My break came in October 1996—just after I turned 30. I accepted an Analyst position with Health and Addictions Research, Inc. (HARI), which I landed in part using baseball statistics. And for the first time, I truly enjoyed a full-time adult job[3]. However, the federal grant funding for this position expired (not for the last time) in June 1998, so a few months later I moved on to North Charles Research and Planning Group, then the MEDSTAT Group. These latter two gigs were, in order, horrific and not-bad-for-a-few-months.

All of these companies were located in or near Boston (and no longer exist in late-1990s form). However, as 2000 ended, so did my relationship with the woman my wife Nell half-jokingly calls my first wife. As a result, I decided to resign from MEDSTAT and seek a fresh start in the Philadelphia area, where I was raised.

I actually had a good position lined up with a psychometrics firm in King of Prussia (about 21 miles northwest of Philadelphia), but for still-unexplained reasons, I was “unhired” two days before I was scheduled to start. Nothing breeds paranoia like “we are withdrawing our offer but we won’t tell you why!”

The silver lining, however, was that I was unemployed when a Senior Research Associate position became available at the Family Planning Foundation of Southeast Philadelphia (FPC) in June 2001.

This was where a collection of loosely-related health data positions became a full-fledged career in “health-related data analysis.” Following the abrupt departure of my initial supervisor, I effectively ran a grant-funded research project. When that project ended after one year, I was promoted to direct a new grant-funded project; this latter project remains the most rewarding professional work I have ever done.

In the meantime, I was preparing and delivering talks at scientific conferences (American Public Health Association, Eastern Evaluation Research Society—on whose Board of Directors I would serve for a year). My colleagues and I wrote and published a peer-reviewed journal article for yet a third grant-funded project; I was listed as second author[4]. When the woman who directed the Research Department retired, she hired me as a data-analytic consultant.

And so forth.

That first project for which I was hired related to the association between the establishment of neighborhood youth development activities and teen pregnancy rates. As I recall (more than 16 years later), these activities were established in selected zip codes in North Philadelphia (the “exposed” group), but not in West Philadelphia (the “unexposed” group)—unless it was the other way around.

FPC was one of 12 sites chosen nationwide to receive one of these teen pregnancy prevention grants. At the end of the project, we began to write an article summarizing our findings. This was scheduled to appear in a special edition of a peer-reviewed journal (I forget which one) presenting the results from each funding site. While I was well-educated in quantitative methods (albeit from a social science perspective), we needed a more specific type of statistical expertise.

Enter Dr. Constantine Daskalakis on a consulting contract.

This man was a revelation to me. I had not known there was such a thing as “biostatistics,” and, despite working in public health as a data analyst, I was only vaguely aware of what “epidemiology” was.

In fact, all I really knew about epidemiology was an odd remark my Harvard doctoral committee chair made while teaching one of my graduate American politics classes: “Getting a PhD in political science is tough, but if you really want to do something hard, get a PhD in epidemiology.”

Make of this what you will: I did not complete the political science doctorate; I did complete the supposedly much-harder epidemiology doctorate.

What most impressed me about Dr. Daskalakis—who had only recently completed his own biostatistics/epidemiology doctorate—was his sheer clarity of thought. He laid out an effective analytic approach in a few quick steps.

It was, for all intents and purposes, my first epidemiology lesson.

For various reasons (the timing and efficacy of the youth development activities was wonky?), we wrote a solid draft but never submitted it for publication; there went my first chance to be a first author.

Until then, I had fully rejected the idea of completing a doctorate in a different field; the wounds were still too raw. But the idea of directing my own grant-funded projects—even directing a non-profit research department myself—began to appeal to me. And that would require pursuing a public-health-related doctorate in either biostatistics or epidemiology (they were already cleaving into distinct fields of study).

It remained simply a vague notion, however, until the summer of 2004, when in quick succession 1) my mother died, leaving my stepfather and me co-executors of her modest (but not trivial) estate, 2) the second grant project ended, 3) the next grant-funded project proved less appealing and 4) the siren call of Boston grew ever louder, especially after a trip there which combined a HARI reunion and catching up with friends at the 2004 Democratic National Convention[5].

At the reunion, I heard excellent things about the Boston University School of Public Health (BUSPH). With no desire to return to Harvard (and/or fearing they would not want me back, even in a different graduate school), that was the only viable option I had.

That Fall, as the lawyer-driven[6] rift between my stepfather and me grew wider, a solution to our impasse occurred to me: sell the condominium my mother had intended me to have (and from which I was earning rent) and use the proceeds to pursue a doctorate at BUSPH.

Starting around my 39th birthday, no less.

My intention had been to apply for a doctorate in epidemiology, but the deadline for biostatistics was later, so that was what I chose. My GRE scores had long since expired, so I needed to take those again. My scores, after re-learning how to study for any kind of exam (the last time I had taken anything close to an exam was May 1991, when I somehow passed my Harvard GSAS oral and written exams), were…good enough.

But when I submitted my application to BUSPH, their response was a qualified acceptance: given how many years (20) had passed since I had taken a pure mathematics class, they enrolled me in the Master’s Degree program. I was excited and disappointed in roughly equal measure.

[Spoiler alert: they were not wrong]

Nonetheless, I was returning to Boston for what was shaping up to be a multi-step process. I submitted my resignation at FPC, and left—with an emotional send-off—at the end of June 2005.

In the meantime, I was still waiting for my stepfather to settle my mother’s estate with me…which he finally did in July 2005. In the interim, I had to borrow money from a friend to secure the apartment I had located in the Boston suburb of Waltham (yes, where Brandeis is located).

The final dispensation check was dated August 9, 2005; I know the date because I took an enlarged photocopy of it (it is resting comfortably in a filing cabinet behind me and to the left). No, I am not going to include a photograph of the photocopy.

However, just bear with me for a brief romantic digression.

**********

On October 31, 2005, my first Halloween night back in Boston, I received a message from a woman named “Nell” on Friendster, one of the original social networks (and quasi-dating site). On a lark, I had posted on my profile page 10 trivia questions based upon key interests/likes (sample question: “Freddie Freeloader sits between what two greats?”[7]).

Only a few miles away in the Boston neighborhood of Brighton, Nell, a private school teacher from Washington DC, was bored. Something about my profile appealed to her, so she took the time to research the questions to which she did not already know the answers.

Naturally, I was deeply flattered—and intrigued by her profile (and, later, her use of the word “persiflage” as the subject line for her first e-mail to me). We struck up a brief correspondence, then went on our first date (meeting in Harvard Square to eat at Bertucci’s—which is no longer there—and watch Good Night, and Good Luck—at a movie theatre which no longer exists). I was so nervous, I kept dropping the movie tickets.

I must not have been too nervous, though: we married 23 months (and one day) later[8].

**********

My plan had been to complete all of my coursework in two semesters (while not earning any income other than interest) to save money. I had already paid off some substantial credit card debts and lingering student loans—and a few days after I returned to Boston, my 1995 Buick Century died. Rather than incur new debt, I paid in full for my black 2005 Honda Accord (it was love at first sight when I spotted it on the dealership lot); I still drive that Accord.

Four courses a semester proved too stressful, though, so I paid for an additional semester.

On a Thursday night in early September 2005, I drove down to the Albany Street campus, parked and walked into a classroom—more of a small auditorium, really—for the first time (as a student) in nearly 15 years. It was Dan Brooks’ Introduction to Epidemiological Methods; the two disciplines may have cleaved into different departments but they were still interconnected.

And, just like that, I was home. In epidemiology, I had found that perfect combination of applied math, logic and critical thinking I had not even known I was searching for until I found it. Even as I labored joyfully through, first, Intermediate then Modern Epidemiology (perhaps the best course I have ever taken), I knew I would soon be applying to the BUSPH doctoral program in epidemiology.

It had to be soon, actually, because my GRE scores would expire in 2010.

By January 2007, I had completed both my “theoretical” and “applied” qualifying exams, and I received my diploma a short time later. I had already parlayed my impending degree into a Quality Researcher position at the Massachusetts Behavioral Health Partnership (MBHP), where I would remain until I was laid off (expiration of grant funds again) in June 2010.

My application to the BUSPH epidemiology doctoral program was accepted early in 2009 (“We were wondering when you were going to apply!”), and I enrolled that September. Thank goodness I did, because when I left MBHP the following June, we lost our health insurance; BUSPH picked up the slack.

In May 2011, I accepted an Outcomes Analyst position with Joslin Diabetes Center, where I would remain until June 2015, when—you guessed it—the federal grant funding expired. Yes, not only did my father die on June 30 (1982), I left four different positions (only one truly voluntarily) on that day in 1998, 2005, 2010 and 2015. And yet it is not even close to my least favorite day of the year; I reserve that honor for Valentine’s Day, which I utterly loathe.

Unlike my doctoral program at Harvard, the BUSPH epidemiology program had an elegant, well-ordered rhythm to it: two years of coursework—culminating with the dreaded hurdle known colloquially as “Dan Brooks’ seminar.” After that came the “biostatistics” and “epidemiology” qualifying exams, selection of a three-person committee and a thesis topic, drafting of a short letter of intent outlining the three connected studies you were going to conduct, drafting of a very-detailed 25-page outline of the final dissertation, then the researching and writing of the thesis itself.

Nothing to it, he wrote with a shudder of remembrance.

And, of course, what followed that five-year journey (nine if you count the biostatistics MA) was the doctoral defense.

Oh my…the defense.

img_1460

Technically, this photograph was taken (on the late afternoon of December 16, 2014) after I had successfully defended (when the three doctoral committee members leave the room to “confer”—and return with cake and champagne), but my slides are still being projected, so it is close enough.

Not long after, I collected this from…somewhere…on campus.

IMG_1757 (2).JPG

Nearly 20 years after I had walked away from one doctoral program, I had successfully completed an entirely different one.

And this is essentially where you came in to the movie.

Until next time…

[1] In December 2015.

[2] After the funeral (at which I eulogized my mother), I spent much of the evening walking around my late stepfather’s house, where we were sitting shiva for my mother, swigging directly from a bottle of Scotch. When I walked out of the house later that night in the direction of my parked car, a family friend with the superb nickname “Yo!” said he would “rip out [my] fucking distributor cap” if I attempted to drive myself home. Not being a complete fool, I permitted a close male cousin to drive me home.

[3] And where I taught myself my first geographic information systems (GIS) software package.

[4] A 2000 article based on HARI research listed me as third author.

[5] In June 1991, a late friend of mine from suburban Philadelphia asked me to come to St. Louis to support his candidacy for Treasurer of the Young Democrats of America. I rented a car and drove to St. Louis, renting my very own room in the conference hotel, and joining the Pennsylvania delegation. I became friends with some members of the Alaska delegation, one of whom served as a whip at the 2004 convention in Boston. She was the one who invited me to Boston. I was actually in the rafters of the Fleet Center (the former Boston Garden, now the TD Garden) for former president Bill Clinton’s address—having walked by then-Representative Dennis Kucinich of Ohio on the way in to the building. I was in a local bar watching with dropped jaw as a charismatic young Illinois State Senator and candidate for United States Senate named Barack Obama gave the keynote address. While I was there, Mr. Obama spoke to a few dozen or so people at nearby Christopher Columbus Waterfront Park; I saw his speech, but I regret not meeting him and/or getting a photograph with him.

[6] I still do not quite understand why he chose to fight my mother’s—his wife’s—crystal-clear distribution of what property she had. But he did so—then tried to intimidate me by hiring a man named Vito Canuso, who had been the chair of the Philadelphia Republican Party…at some point. I countered by hiring the lawyer—Barbara Harrington Hladik—my mother had used for my sister Mindy’s guardianship hearing (she is severely mentally retarded; I am her legal guardian now). It was a mismatch from the start—Canuso never had a chance.

[7] Answer: “Freddie Freeloader” is the 2nd track on the Miles Davis masterpiece Kind of Blue, “sitting” between “So What” and “Blue in Green,” my favorite track…period.

[8] It was not all smooth sailing—but we made it there in the end.

Separating the art from the artist

The director David Lynch—who I dressed as this past Halloween—gave this response to a question about the meaning of a puzzling moment toward the end of episode 15 of Twin Peaks: The Return.

“What matters is what you believe happened,” he clarified. “That’s the whole thing. There are lots of things in life, and we wonder about them, and we have to come to our own conclusions. You can, for example, read a book that raises a series of questions, and you want to talk to the author, but he died a hundred years ago. That’s why everything is up to you.”

On the surface, this is a straightforward answer, one Lynch has restated in different ways over the years: the meaning of a piece of art is whatever you think it is. Every individual understands a piece of art through her/his own beliefs and experiences.

I am reminded of a therapeutic approach to the interpretation of dreams that particularly resonates with me.

You tell your therapist what you remember of a dream. The therapist then probes a little more, attempting to elicit forgotten details. The conversation then turns to the “meaning” of the dream. Some therapists may pursue the Freudian notion of a dream as the disguised fulfillment of a repressed wish (so what is the wish?). Other therapists may look to the symbolism of characters and objects in the dream (is every character in a dream really a version of the dreamer?) for interpretation.

Then there is what you might call the Socratic approach; this is the approach that resonates with me. The therapist allows the patient to speculate what s/he thinks the dream means. Eventually, the patient will arrive at a meaning that “clicks” with her/him, the interpretation that feels correct. The therapist then accepts this interpretation as the “true” one.

That the “dreams mean whatever you think they mean” approach aligns nicely with Lynch’s musing is not surprising, given how central dreams and dream logic are to his film and television work.

We live inside a dream

However, there is a subtext to Lynch’s musing about artistic meaning that is particularly relevant today.

**********

The November 20, 2017 issue of The Paris Review includes author Claire Dederer’s essay “What Do We Do with the Art of Monstrous Men?”

I highly recommend this elegant and provocative essay.

For simplicity, I will focus on two questions raised by the essay:

  1. To what extent should we divorce the artist from her/his art when assessing its aesthetic quality?
  2. Does successful art require the artist to be “monstrously” selfish?

Dederer describes many “monstrous” artists, nearly all men (she struggles when cataloging the monstrosity of women, despite how odious she finds the impact of Sylvia Plath’s suicide on her children) before singling out Woody Allen as the “ur-monster.”

And here is where I discern a deeper meaning in Lynch’s “dead author” illustration.

Lynch’s notion that one brings one’s own meaning to any piece of art is premised on the idea that the artist may no longer be able to (or may choose not to) reveal her/his intent.

But that implies that something about the artist is relevant to understanding her/his art. Otherwise, one would never have sought out the artist in the first place.

The disturbing implication is that it is all-but-impossible to separate art from artist.

This is Dederer’s conundrum, and it is mine as well.

**********

A few years ago, a group of work colleagues and I were engaging in a “getting to know each other” exercise in which each person writes down a fact nobody else knows about them, and then everyone else has to guess whose fact that is.

I wrote, “All of my favorite authors were falling-down drunks.”

Nobody guessed that was me, which was a mild surprise.

Of course, the statement was an exaggeration, a tongue-in-cheek poke at the mock seriousness of the process.

Still, when I think about many of the authors I love, including Dashiell Hammett, Raymond Chandler, Edgar Allan Poe, John Dickson Carr, Cornell Woolrich, David Goodis[1]

…what first jumps to mind is that every author I just listed is male (not to mention inhabiting the more noir corners of detective fiction). So far as I know, my favorite female authors (Sara Paretsky, Ngaio Marsh and Agatha Christie, among others) do/did not have substance abuse problems.

Gender differences aside, while not all of these authors were alcoholics, they did all battle serious socially-repugnant demons.

Carr, for example, was a virulently racist and misogynistic alcoholic.

He also produced some of the most breathtakingly-inventive and original detective fiction ever written.

Woolrich was an agoraphobic malcontent who was psychologically cruel to his wife during and just after their brief, unconsummated marriage[2].

He also basically single-handedly invented the psychological suspense novel. More films noir (including the seminal Rear Window) have been based on his stories than those of any other author.

And so forth.

It is not just the authors I admire who are loathsome in their way.

I never cease to be amazed by the music of Miles Davis, who ranks behind only Genesis and “noir troubadour” Stan Ridgway in my musical pantheon. His “Blue in Green” is my favorite song in any genre, and his Kind of Blue is my favorite album.

But this is the same Miles Davis who purportedly beat his wives, abused painkillers and cocaine, was taciturn and full of rage, and supposedly once said, “If somebody told me I only had an hour to live, I’d spend it choking a white man. I’d do it nice and slow.”[3]

Moving on, my favorite movie is L.A. Confidential.

Leaving aside the shenanigans of co-star Russell Crowe, there is the problem of Kevin Spacey, an actor I once greatly respected.

Given the slew of allegations leveled at Spacey, the character arc of his “Jack Vincennes” in Confidential is ironic.

But first, let me warn any reader who has not seen the film that there are spoilers ahead. For those who want to skip ahead, I have italicized the relevant paragraphs.

Vincennes is an amoral 1950s Los Angeles police officer whose lucrative sideline is selling “inside” information to Sid Hudgens, publisher of Hush Hush magazine, reaping both financial rewards and high public visibility. Late in the film, he arranges for a young bisexual actor to have a secret (and then-illegal) sexual liaison with the District Attorney, a closeted homosexual. Vincennes and Hudgens would then catch the DA and the young actor in flagrante delicto.

Sitting in the Formosa Club that night, however, Vincennes has a sudden pang of conscience and leaves the bar (symbolically leaving his payoff—a 50-dollar bill—atop his glass of whiskey), intending to stop the male actor from “playing his part.” Unfortunately, he arrives at the motel room too late; the actor has been murdered.

Determined to make amends, he teams up with two other detectives to solve a related set of crimes, including the murder of the young actor. In the course of his “noble” investigation, he questions his superior officer, Captain Dudley Smith, one quiet night in the latter’s kitchen. Realizing that Vincennes is perilously close to learning the full extent of his criminal enterprise, Smith suddenly pulls out a .32 and shoots Vincennes in the chest, killing him.

OK, the spoilers are behind us.

**********

This listing of magnificent art made by morally damaged people demonstrates I am in the same boat as Claire Dederer: I have been struggling for years to separate art from artist.[4]

And that is before discussing the film that serves as Dederer’s Exhibit A: Woody Allen’s Manhattan.

Dederer singles out Manhattan (still one of my favorite films) because of the relationship it depicts between a divorced man of around 40 (Isaac, played by Allen himself) and a 17-year-old high school student named Tracy (Mariel Hemingway).

Not only is the relationship inherently creepy (especially in light of recent allegations by Hemingway and the fact that in December 1997, the 62-year-old Allen married the 27-year-old Soon-Yi Previn, the adopted daughter of his long-time romantic partner Mia Farrow[5]), but, as Dederer observes, the blasé reaction to it from other adult characters in the film makes us cringe even more.

As I formulated this post—having just read Dederer’s essay—I thought about why I love Manhattan so much.

My reasons are primarily aesthetic: the opening montage backed by George Gershwin’s Rhapsody in Blue (and Allen’s voiceover narration), Gordon Willis’ stunning black-and-white cinematography, the omnipresence of a vibrant Manhattan itself.

In addition, the story, a complex narrative of intertwined relationships and their aftermath, is highly engaging. The dialogue is fresh and witty—and often very funny. The characters are quirky (far from being a two-dimensional character, I see Tracy as the moral center of the film) but still familiar.

And then there is the way I saw the film for the first time.

The movie was released on April 25, 1979. At some point in the next few months, my father took me to see it at the now-defunct City Line Center Theater (now a T.J. Maxx) in the Overbrook neighborhood of Philadelphia. Given that I was 12 years old, it was an odd choice on my father’s part, but I suspect he wanted to see the film and seized the opportunity of his night with me (my parents had been separated two years at this point) to do so.

City Line Theater

I recall little about seeing Manhattan with him, other than being vaguely bored. I mean, it was one thing for old movies and television shows to be in black-and-white (like my beloved Charlie Chan films), but a new movie?

I do not remember when I saw Manhattan again. At one of Yale’s six film societies? While flipping through television channels in the 1990s? Whenever it was, the film clicked with me on that second viewing, and I have only become fonder of it since then.

Two observations are relevant here.

One, it is clear to me that the fact that I first saw Manhattan at the behest of my father, who I adored in spite of his many flaws, heavily influenced my later appreciation of the film[6].

Two, this appreciation cemented itself years before Allen’s perfidy became public knowledge.

These two facts help explain (but not condone) why I still…sidestep…my conscience to admire Manhattan as a work of art.

**********

Ultimately, I think the following question best frames any possible resolution of the ethical dilemma of appreciating the art of monstrous artists:

Which did you encounter first, the monstrous reputation of the artist…or the art itself?

I ask this question because my experience is that once I hear that a given artist is monstrous, I have no desire to experience any of her/his art.

Conscience clear. No muss, no fuss.

That includes not-yet-experienced works by an artist I have learned is loathsome. I have not, for example, seen a new Woody Allen film since the execrable The Curse of the Jade Scorpion in 2001.

But if I learn about the artist’s monstrous behavior AFTER reacting favorably to a piece of her/his art, I will often find myself still drawn to the art.[7]

Conscience compartmentalized. Definitely some muss, some fuss.

My love of these works is just too firmly embedded in my consciousness to unwind. Thus, I still love the music of Miles Davis. L.A. Confidential remains my favorite movie. Manhattan may have dropped some in my estimation, but it is still in my top 10.

I am reminded of this line from “Seen and Not Seen” on the Talking Heads album Remain in Light:

“This is why first impressions are often correct.”

**********

And here is where I think Lynch’s impressionistic approach to finding meaning in art and the patient-centered approach to dream interpretation—art and dreams mean whatever we think they mean—relate to the question of loving art while loathing the artist.

Art is a deeply personal experience. The “Authority” Dederer so pointedly disdains in her essay can provide guidance, but (s)he cannot experience the art for you or me.

Put simply, each of us is an “Authority” on any given piece of art—and also on whether or not to seek out that art.

For example:

As a child, I found myself hating The Beatles simply because I was supposed to love them. However, once I discovered their music on my own terms, purchasing used vinyl copies of the “Red” and “Blue” albums (which I still own 30+ years later) along with Abbey Road, The Beatles (the “White” Album), Sgt. Pepper’s Lonely Hearts Club Band, Revolver and Rubber Soul…suffice to say I have 124 Beatles tracks (out of 9,504) in my iTunes, second only to Genesis (288). The Beatles also rank sixth in total “plays” behind The Cars, Steely Dan, Miles Davis (there he is again), Stan Ridgway and Genesis.

Each of us is also the Authority on our changing attitudes toward a given piece of art, including what we learn about the artist, knowledge which then becomes one more element we bring to the subjective experience of art.

**********

Dederer speculates about whether artists (particularly writers) somehow NEED to be monstrous to be successful.

(Upon writing that last sentence, the phrase “madness-genius” began to careen around my brain).

As a writer with advanced academic training in epistemology-driven-epidemiology, I would suggest the following study to assess this question.

A group of aspiring artists who had not yet produced notable works would be identified. They would be divided into “more monstrous” and “less monstrous,”[8] definitions to be determined. These artists would be followed for, say, 10 years, after which time each artist still in the study would be classified as “more successful” or “less successful,” definitions to be determined. The percentages of artists in each category who were “more successful” would then be compared, to see whether being “monstrous” made an aspiring artist more or less likely to be “successful,” or made no difference at all.

This would not settle the question of the link between monstrosity and art by any means, but it would sure be entertaining.
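Framed the way an epidemiologist would frame it, this is a simple cohort comparison of two proportions. Here is a minimal sketch of the key calculation, with entirely hypothetical counts (nothing below comes from real data):

```python
def cohort_comparison(monstrous_success, monstrous_total,
                      gentle_success, gentle_total):
    """Compare the proportion 'more successful' in two cohorts of artists.

    All inputs are hypothetical counts. Returns each group's risk
    (proportion successful) and the risk ratio between the groups.
    """
    risk_monstrous = monstrous_success / monstrous_total
    risk_gentle = gentle_success / gentle_total
    return risk_monstrous, risk_gentle, risk_monstrous / risk_gentle

# Hypothetical 10-year follow-up: 12 of 40 "more monstrous" artists
# and 9 of 60 "less monstrous" artists end up "more successful."
rm, rg, rr = cohort_comparison(12, 40, 9, 60)
# rm = 0.30, rg = 0.15, rr = 2.0 -- in this made-up example, the
# "monstrous" artists were twice as likely to succeed.
```

A risk ratio near 1.0 would suggest monstrosity makes no difference; well above or below 1.0 would suggest it helps or hurts, respectively.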

**********

When Dederer talks about the monstrous selfishness of the full-time writer, she focuses on the temporal trade-offs writers must make—time with family and friends versus time spent writing. Writing is an almost-uniquely solitary endeavor, as I first learned writing my doctoral thesis, and as I continue to experience in my new career.

Luckily, my wife and daughters remain strongly supportive of my choice to become a “writer,” so I have not yet felt monstrously selfish.

There is a different kind of authorial “selfishness,” though, that I would argue is both more benign and more beneficial to the author.

When I began this blog, my stated aim was to focus solely on objective, data-driven stories; my personal feelings and life story were irrelevant (outside of this introductory post).

Looking back over my first 48 posts, though, I was surprised to count 17 (35.4%) I would characterize as “personal” (of which three are a hybrid of personal and impersonal). These personal posts, I observed, have also become more frequent.

Even more surprising was how much more “popular” these “personal” posts were. As of this writing, my personal posts averaged 28.4 views (95% confidence interval [CI]=19.9-36.9), while my “impersonal” posts averaged 14.5 views (95% CI=10.8-18.1); the 95% CI around the difference in means (14.0) was 6.3-21.6.[9]
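For anyone curious how an interval like that is constructed, here is a minimal sketch using the normal approximation; the means and standard errors below are hypothetical stand-ins (my actual figures were computed from the raw view counts, which may yield a slightly wider t-based interval):

```python
import math

def diff_in_means_ci(mean1, se1, mean2, se2, z=1.96):
    """Approximate 95% confidence interval for the difference of two
    independent means, given each mean and its standard error."""
    diff = mean1 - mean2
    # Standard errors of independent means combine in quadrature.
    se_diff = math.sqrt(se1 ** 2 + se2 ** 2)
    return diff, diff - z * se_diff, diff + z * se_diff

# Hypothetical summary statistics, not the blog's actual view counts:
diff, lo, hi = diff_in_means_ci(28.0, 4.0, 14.0, 2.0)
# diff = 14.0; the interval spans diff +/- 1.96 * sqrt(4**2 + 2**2)
```

If the resulting interval excludes zero, the difference between the two groups of posts is (conventionally) statistically significant.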

Moreover, the most popular post (77 views, 32 more than this post) is a very personal exploration of my love of film noir.

In other words, while none of my posts have been especially popular (although I am immensely grateful to every single reader), my “personal” posts have been twice as popular as my “impersonal” posts.

I had already absorbed this lesson somewhat as I began to formulate the book I am writing[10]. Initially inspired by my “film noir personal journey” post, it has morphed into a deep dive not only into my personal history, but also the history of my family (legal and genetic) going back three or four generations.

This, then, is the “selfish” part: the discovery that the most popular posts I have written are the ones in which I speak directly about my own life and thoughts, leading me to begin to write what amounts to a “hey, I really like film noir…and here are some really fun stories about my family and me” memoir-research hybrid. One that I think will be very entertaining.

Whether an agent, publisher and/or the book-buying public ever agree remains an open question.

**********

Just bear with me (I had to write that phrase at some point) while I fumble around for a worthwhile conclusion to these thoughts and memories.

I am very hesitant ever to argue that the ends justify the means, meaning that my first instinct is to say that art produced by monstrous artists should be avoided.

But I cannot say that because, having formed highly favorable “first (and later) impressions” of various works of art produced by “monstrous” artists, I continue to love those works of art. I may see them differently, but the art itself has not changed. “Blue in Green” is still “Blue in Green,” regardless of what I learn about Miles Davis, and it is still my favorite song.

And that may be the key. Our store of information about a piece of art may change, but the art itself does not change. It is fixed, unchanging.

Of course, if Lynch and the patient-centered therapists are correct that we each need to interpret/appreciate (or not) works of art as individuals, then how we react to that piece of art WILL change as our store of information changes.

Shoot. I thought I had something there.

Well, then, what about the “slippery slope” argument?

Once we start down the path of singling out certain artists (and, by extension, their works of art) for opprobrium, where does that path lead?

The French Revolution devolved into an anarchic cycle of guillotining because (at least as I understand it) competing groups of revolutionaries began to point the finger at each other, condemning rival groups to death as power shifted between the groups.

This is admittedly an extreme example, but my point is that once we start condemning monstrosity in our public figures, it is difficult to stop.

It is also the case that very few of us are pure enough to condemn others. We all have our Henry Jekyll, and we all have our Edward Hyde, within us. I think the vast majority of us contain far more of the noble Dr. Jekyll than of the odious Mr. Hyde, but we all have enough of the latter to be wary of hypocrisy.

And if THAT is not a good argument, then I have one more.

Simply put, let us all put on our Lynchian-therapeutic cloaks and make our own decisions about works of art, bringing to bear everything we know and feel and think, including our conscience…while also understanding that blatant censorship (through public boycott or private influence) is equally problematic…

These decisions may be ethically uncomfortable, but as “Authorities,” they are ultimately ours and ours alone.

Until next time…

[1] Fun fact about Goodis: Philadelphia-born-and-raised, he is buried in the same cemetery as my father.

[2] Woolrich was also a self-loathing homosexual.

[3] This quote is found on page 61 of the March 25, 1985 issue of Jet, in a blurb titled “Miles Davis Can’t Shake Boyhood Racial Abuse.” The quote is apparently from a recent interview with Miles White of USA Today, but I cannot find the actual USA Today article.

As a counter, and for some context, here is a long excerpt from Davis’ September 1962 Playboy interview.

Playboy: You feel that the complaints about you are because of your race?

Davis: I know damn well a lot of it is race. White people have certain things they expect from Negro musicians — just like they’ve got labels for the whole Negro race. It goes clear back to the slavery days. That was when Uncle Tomming got started because white people demanded it. Every little black child grew up seeing that getting along with white people meant grinning and acting clowns. It helped white people to feel easy about what they had done, and were doing, to Negroes, and that’s carried right on over to now. You bring it down to musicians, they want you to not only play your instrument, but to entertain them, too, with grinning and dancing.

Playboy: Generally speaking, what are your feelings with regard to race?

Davis: I hate to talk about what I think of the mess because my friends are all colors. When I say that some of my best friends are white, I sure ain’t lying. The only white people I don’t like are the prejudiced white people. Those the shoe don’t fit, well, they don’t wear it. I don’t like the white people that show me they can’t understand that not just the Negroes, but the Chinese and Puerto Ricans and any other races that ain’t white, should be given dignity and respect like everybody else.

But let me straighten you — I ain’t saying I think all Negroes are the salt of the earth. It’s plenty of Negroes I can’t stand, too. Especially those that act like they think white people want them to. They bug me worse than Uncle Toms.

But prejudiced white people can’t see any of the other races as just individual people. If a white man robs a bank, it’s just a man robbed a bank. But if a Negro or a Puerto Rican does it, it’s them awful Negroes or Puerto Ricans. Hardly anybody not white hasn’t suffered from some of white people’s labels. It used to be said that all Negroes were shiftless and happy-go-lucky and lazy. But that’s been proved a lie so much that now the label is that what Negroes want integration for is so they can sleep in the bed with white people. It’s another damn lie. All Negroes want is to be free to do in this country just like anybody else. Prejudiced white people ask one another, “Would you want your sister to marry a Negro?” It’s a jive question to ask in the first place — as if white women stand around helpless if some Negro wants to drag one off to a preacher. It makes me sick to hear that. A Negro just might not want your sister. The Negro is always to blame if some white woman decides she wants him. But it’s all right that ever since slavery, white men been having Negro women. Every Negro you see that ain’t black, that’s what’s happened somewhere in his background. The slaves they brought here were all black.

What makes me mad about these labels for Negroes is that very few white people really know what Negroes really feel like. A lot of white people have never even been in the company of an intelligent Negro. But you can hardly meet a white person, especially a white man, that don’t think he’s qualified to tell you all about Negroes.

You know the story the minute you meet some white cat and he comes off with a big show that he’s with you. It’s 10,000 things you can talk about, but the only thing he can think of is some other Negro he’s such close friends with. Intelligent Negroes are sick of hearing this. I don’t know how many times different whites have started talking, telling me they was raised up with a Negro boy. But I ain’t found one yet that knows whatever happened to that boy after they grew up.

Playboy: Did you grow up with any white boys?

Davis: I didn’t grow up with any, not as friends, to speak of. But I went to school with some. In high school, I was the best in the music class on the trumpet. I knew it and all the rest knew it — but all the contest first prizes went to the boys with blue eyes. It made me so mad I made up my mind to outdo anybody white on my horn. If I hadn’t met that prejudice, I probably wouldn’t have had as much drive in my work. I have thought about that a lot. I have thought that prejudice and curiosity have been responsible for what I have done in music.

[4] This has actually impacted me directly. Privacy concerns prevent me from using names, but I have had long and painful discussions with people close to me who were either related to, or knew very well, artists whose work they admired but who were/are loathsome human beings.

[5] Purportedly, Allen and his quasi-step-daughter (Allen and Farrow never married) had been having a long-term affair.

[6] And, perhaps, of black-and-white cinematography more generally.

[7] There are exceptions to this, of course. As much as I love the Father Brown stories by G.K. Chesterton, his blatant anti-Semitism has likely permanently soured me on his writing.

[8] Acknowledging that “monstrosity” is not binary, but a continuum. We have all had monstrous moments, and even the most monstrous people have had a moment or two of being above reproach.

[9] Using a somewhat stricter definition of “personal” made the difference even starker.

[10] Tentative title: Interrogating Memory: How a Love of Film Noir Led Me to Investigate My Own Identity.

Final thoughts from what is almost certainly my final APHA meeting

I debuted this blog 11 months ago yesterday as a place to tell what I hoped would be entertaining and informative data-driven stories. Given my proclivity for, and advanced academic training in, quantitative data analysis, the vast majority of my 47 prior posts have involved the rigorous and systematic manipulation of numbers.

But not all data are quantitative. Sometimes they are “qualitative,” or simply impressionistic.

A few weeks ago, I wrote a post about my impending trip to Atlanta to attend the American Public Health Association (APHA) Annual Meeting and Expo. This post served two purposes:

  1. To allow me to archive online:
    1. The full text (minus Acknowledgments and CV) of my doctoral thesis (Epidemiology, Boston University School of Public Health, May 2015)
    2. The PowerPoint presentation I delivered in defense of that thesis (minus some Acknowledgment slides) in December 2014
    3. Both oral presentations I delivered at the APHA Meeting
  2. To explore the idea that the decision to change careers (which I detail here) actually began two years earlier than I thought, with the completion of this doctorate.

I submitted three abstracts to APHA (one for each dissertation study) when I was still looking for ways to jumpstart my health-data-analyst job search (and my flagging interest in the endeavor). I was shocked that any of my abstracts were accepted for oral presentation (if only because I had no institutional affiliation) and quite humbled that two were accepted.

Once they were accepted, though, I felt an obligation to prepare and deliver the two oral presentations, despite the fact that I had decided to embark on a different career path.

(I did, however, truncate the length of my attendance from all four days to only the final two days, the days on which I was scheduled to give my presentations.)

I also recalled how much I used to enjoy attending APHA Meetings with my work colleagues. My first APHA Meeting—Atlanta, October 2001—was also the place I delivered an oral presentation to a large scientific conference for the first time.

APHA 2001

**********

There are two interesting coincidences related to this presentation.

One, I gave this presentation at the Atlanta Marriott Marquis, the same hotel in which I just stayed for the 2017 APHA Meeting[1].

Two, the presentation itself—GIS Mapping: A Unique Approach to Surveillance of Teen Pregnancy Prevention Efforts (coauthored with my then-supervisor)—drew upon a long-term interest of mine: what you might call “geographical determinism,” which is a pretentious way of saying that “place matters.”

To explain, just bear with me while I stroll down a slightly bumpy memory lane.

I have always loved maps—street maps, maps of historical events, atlases, you name it. As a political science major at Yale, I discovered “electoral geography.” At one point while I was working as a research assistant for Professor David Mayhew, I mentioned the field to him.

Hmm, he responded. I should teach a course about that next semester.

He did.

I still have the syllabus.

As a doctoral student at Harvard (the doctorate I did NOT finish), I formulated a theory for my dissertation about why some areas tended to vote reliably Democratic while others tended to vote reliably Republican that was based on the way demographic traits (e.g., race, socioeconomic status [SES], religion) were distributed among an area’s population. The idea was that because everyone has a race AND an age AND a gender AND a SES level AND a religion AND so on, the areal distribution of these traits makes some more politically salient than others in that area.

Well…it all made perfect sense to me back in the early 1990s.

Because this was not already complicated enough to model and measure, I originally chose to test this theory using data from presidential primary elections, with all of their attendant flukiness. I even spent a pleasant afternoon in Concord, New Hampshire collecting (hand-written) town-level data on the state’s 1976 presidential primary elections.

Did I mention that New Hampshire has 10 counties, 13 cities, 221 towns, and 25 unincorporated places?

From the start, however, it was an uphill battle getting this work taken seriously[2]. One of the four components of my oral exams in May 1991 was a grilling on the electoral geography literature review I had recently completed.

Rather than ask me questions about (for example) J. Clark Archer’s work on the geography of presidential elections, however, the professor who would soon chair my doctoral committee peppered me with questions about why we should study political/electoral geography when academic geography departments were closing, or what James Madison’s antipathy to faction said about viewing elections through the lens of geography.

I have no recollection of how I answered those questions, but I know that I passed those exams by the skin of my teeth[3].

(Ironically, just nine years later, the nation would be riveted by Republican “red states” and Democratic “blue states” during the Florida recount that decided the 2000 presidential election between Texas Governor George W. Bush and Vice President Al Gore).

The real kicker, though, came a year later.

Harvard at the time had a program with a name like “sophomore seminars.” These small-group classes were a chance for doctoral students to prepare and teach a semester-length seminar of their own design to undergraduate political science majors.

I eagerly jumped at the chance and applied to teach one in American electoral geography, drafting a syllabus in the process. Once it was accepted, I organized the first class, including getting permission to copy a Scientific American article, which I then copied.

Towards the end of the summer, they posted (I do not remember where, but it was 1992, so it was literally a piece of paper tacked to a bulletin board) the names of the students who would be taking each seminar.

I looked for my class.

I could not find it.

I soon discovered why. Only one student had signed up (and it was not even her/his first choice), so the seminar had been cancelled.

That was one of the most crushingly disappointing moments of my life.

In retrospect, this was most likely when my interest in completing this doctoral program began to seriously wane—even though I stuck it out for three more years.

(In a bittersweet bit of irony, five years after I walked away from that doctoral program came the 2000 U.S. presidential election. Because of the month-long Florida recount, the “red state-blue state” map of the election burned into the public consciousness. Electoral geography, at least at this very basic level, suddenly became a “thing.” To this day, there is talk of “red,” “blue” and even “purple” states.)

The good news was that the idea of looking at data geographically still appealed to me tremendously, and I was lucky enough to be able to learn and use ArcGIS mapping software in my first professional job as a health-related data analyst. The best moment in this regard came when I produced a town-level map of alcohol and substance use problems in Massachusetts. The towns with the most severe issues were colored in red, and I noticed that they followed two parallel east-west lines emanating from Boston, and that they were crossed by a north-south line in the western part of the state.

Oh, I exclaimed. The northern east-west line is Route 2, the southern east-west line is I-90 (the Massachusetts Turnpike) and the intersecting north-south line is I-91. Of course, these are state-wide drug distribution routes.

Three professional positions later, temporarily living in Philadelphia, I was doing similar work, but now in the area of teen pregnancy–which brings us back to the oral presentation I delivered late on the afternoon of November 7, 2017 and to the second coincidence.

Its title was “Challenges in measuring neighborhood walkability: A comparison of disparate approaches,” and it was the second presentation (of six) in a 90-minute-long session titled Geo-Spatial Epidemiology in Public Health Research.

In other words, 16 years after my first APHA oral presentation, in the same city, I was once again talking about ways to organize and analyze data geographically.

And while the five-speaker session in which I spoke the following morning (Social Determinants in Health and Disease) was not “geo-spatial,” per se, the study I discussed (“Neighborhood walkability and depressive symptoms in black women: A prospective cohort study”) did feature a geographic exposure.

**********

I again coauthored and delivered oral presentations at the APHA Meetings in 2002[4] (Philadelphia) and 2003 (San Francisco); for the 2004 Meeting (Washington, DC) I prepared a poster which I displayed along with a woman I supervised.

That talented young woman—now one of my closest friends—was a huge reason why the 2003 APHA Meeting in San Francisco was so memorable. Other, of course, than the fact that it was IN SAN FRANCISCO!

IMG_1547

IMG_1546

IMG_1533

IMG_0853

As much fun as it was to wander through the exhibit halls and chat with the folks from schools of public health, research organizations, public health advocacy groups, medical device firms and so forth; to amass a full bag of free goodies (“swag,” I prefer to call it) in the process; to read and ask questions about scientific posters; and to sit in a wide range of scientific sessions…

(no, I am serious. I really used to enjoy that stuff, especially in the company (during the day and/or over dinner and drinks in the evenings) of friendly work colleagues)

…after about two days, my colleague and I had had enough.

So we literally played hooky from the Meeting one day.

First, I dragged the poor woman on a “Dashiell Hammett” tour, which took place only a few blocks from our Union Square hotel.

IMG_0736

IMG_0738

Then, we meandered through Chinatown (whose entrance was mere steps away)—stopping for bubble teas along the way—all the way to Fisherman’s Wharf.

IMG_0742

Our ultimate destination was the ferry to Alcatraz. The Alcatraz tour may have been the highlight of that trip. That place is eerie, creepy and endlessly fascinating.

IMG_1557

Someday I will take my wife and daughters there.

That Meeting was also the apex of my APHA experiences. After three years of them, the 2004 version in DC felt stale. I skipped the 2005 APHA Meeting in Philadelphia, as I had just returned to Boston to start my master’s program in biostatistics at Boston University, though I did briefly attend the 2006 APHA Meeting since it was in Boston, and it was a chance to see former work colleagues.

**********

Ultimately, then, attending the 2017 APHA Meeting in Atlanta was a life experiment, a way to gather qualitative “data” to assess the notion that I had put a health-related data analysis career behind for good.

I arrived in Atlanta on the evening of November 6 and took a taxi to the Marriott Marquis.

Holy moley, is this place huge…and it had those internal glass elevators which allow passengers to watch the lobby recede or approach at great speed.

IMG_3284

It was both liberating and lonely not to have work colleagues attending with me. As great as it was not to have to report to anybody, it also meant my time was far more unstructured (other than attending the sessions in which I was presenting).

On Tuesday morning, I dressed in my “presentation” clothes and made my way to the Georgia World Congress Center. This meant taking a mile-long walk in drenching humidity carrying a fully-packed satchel because the APHA chose to reduce its carbon footprint by eliminating shuttle buses.

So I was a sweaty mess when I arrived at the heart of the action. Still, I soldiered on, registering and then checking the location of my session room (luckily, both of my sessions were in the same room—if only because it allowed me, on Wednesday morning, to retrieve the reading glasses I had left on the podium Tuesday evening).

This place was also massive and labyrinthine. It took me a good 30 minutes just to locate the Exhibit Halls.

I wandered through them for an hour or so, talking to some interesting folks and reading a couple of posters. The swag was wholly uninspiring, I am sorry to say.

And I felt…nothing.

No pangs of regret.

No overwhelming desire to return to this field of work.

No longing for work colleagues (other than a general loneliness).

In fact, I mostly felt like a ghost, the way one sometimes does walking around an old alma mater or a place one used to live.

This was my past, and I was perfectly fine with that[5].

That is not to say I did not enjoy giving my talks (which were very well received—I am usually nervous before giving oral presentations…until I open my mouth, and the performer in me takes charge). I did, very much. I also enjoyed listening to the nine other speakers with whom I shared a dais. I picked up terms like “geographically weighted regression” I plan to explore further. I even took the opportunity to distribute dozens of my new business cards (the ones that describe me, tongue somewhat in cheek, as “Writer * Blogger * Film Noir Researcher * Data Analyst”).

But none of that altered my conviction that I have made the right career path decision. I have no idea where the writing path will ultimately lead (although the research for my book has already taken me down some unexpected and vaguely disturbing alleys), professionally or financially, but I remain glad I chose that path.

One final thing…or perspective.

Tuesday, November 7 was also the day that governor’s races were held in New Jersey and Virginia, along with a mayor’s race in New York City and a wide range of state and local elections nationwide.

I had expected to settle in for a long night of room service and MSNBC viewing, but the key races were called so early that I decided to take quick advantage of the hotel swimming pool.

Yes, I waited at least 30 minutes after eating to enter the water.

The pool at the Atlanta Marriott Marquis is primarily indoors (and includes a VERY hot hot tub, almost—but not quite—too hot for me), but a small segment of it is outside; you can swim between the two pool segments through a narrow opening.

If you look directly up from the three shallow steps descending into the outdoor segment of the pool, you see this (if you can find the 27th floor, one of those windows was my room):

IMG_3287

I literally carried my iPhone into the pool to take this photograph, leaning as far back as I could. Thankfully, I did not drop my iPhone in the pool.

Until next time…

[1] The coincidence is not perfect, though, as I do not think we STAYED at the Marriott Marquis in 2001.

[2] Other than the fact that I was awarded a Mellon Dissertation Completion Fellowship in 1994. It was kind of a last-ditch spur to completion. It did not work.

[3] This was the same professor who proclaimed as an aside in a graduate American politics seminar that if you really want to do something hard, get a PhD in epidemiology. Which, of course, I did…25 years later.

[4] Where the Keynote Address was delivered—passionately and to great applause—by an obscure Democratic governor of Vermont named Howard Dean, whose presidential campaign I supported from that moment.

[5] The one caveat to this blanket page-turning is my ongoing interest in geographic determinism, which I am indulging through state- and county-level analyses of the 2016 presidential election. This may be the one successful way to lure me back into the professional data-analytic world.

As I head to the APHA meeting in Atlanta in November…

There have been times, especially lately, that I start to write one post and end up writing an entirely different post.

I originally conceived this post to be a simple repository for a set of documents related to my previous career. The impetus for this was two oral presentations I will be delivering in Atlanta on November 7 and 8, 2017.

As I began to explain why I was posting these documents, however, I found myself plummeting down a rabbit hole, describing a series of unpleasant interactions I had with my doctoral committee a few months after I successfully defended my doctoral dissertation in epidemiology.

It made sense to me at the time (doesn’t it always?), but it soon dawned on me that the tone of that section was…off, and that this is simply not the venue to rehash these private interactions, even as I am still processing them.

But once I stepped back (metaphorically, as I was sitting down at the time), I understood more clearly what I was trying to say.

Let me start at the beginning, if you will just bear with me…

**********

While writing my doctoral dissertation, the members of my doctoral committee and I agreed in principle that after my defense we would work together to publish as many as three peer-reviewed journal articles from it (publication was not a graduation requirement).

From my perspective—a 48-year-old married father of two who was 18 years into a career as a health-related data analyst/project manager—publication was more “cherry on top” than necessity, and perhaps also a courtesy to the members of my doctoral committee and other Boston University School of Public Health (BUSPH) personnel to whom I felt grateful.

I defended my dissertation on December 16, 2014. I was not actually in dark shadows, nor was there a bottle of champagne in front of me, but I love this noir-tinted photograph, and it gives you the flavor of that happy day.

IMG_1458

This was my moment of vindication, the culmination of a journey I had started 26 years earlier. In September 1989, I enrolled in a doctoral program in government at Harvard’s Graduate School of Arts and Sciences (GSAS). Six years later, I resigned from that program with no degree to show for my time there[1]. But just 15 months later I landed the data analyst gig with a Boston non-profit specializing in substance use and abuse that launched my career. Nine years after that, following a four-year sojourn in Philadelphia, I was back in Boston, enrolling in the BUSPH biostatistics master’s degree program. Four years later, I enrolled in their doctoral program in epidemiology.

**********

I have written elsewhere about the deliberations that led me to walk away from that analytic career towards a writing career (although this blog still allows me to analyze data and write about my findings). That transition “officially” occurred in late June 2017.

However, in February 2017, before I made the career-change leap, I was still actively pursuing positions related to my doctoral studies (assessing the health impact of the built environment, as I detail here).

A few months earlier, I had renewed my long-lapsed membership in the American Public Health Association (APHA); that is how I knew that they would be holding their Annual Meeting & Expo (Meeting) in Atlanta, Georgia November 4-8, 2017. I had delivered work-related talks at their 2001, 2002 and 2003 Meetings, and I had presented a poster at their 2004 Meeting, but I had not attended a Meeting since 2006.

Given that this year’s APHA Meeting theme is “Creating the Healthiest Nation: Climate Changes Health,” it appeared to be a perfect opportunity to advance the job search ball down the field. I thus submitted three abstracts, one for each of my three doctoral dissertation studies. To my surprise, two of them were accepted for oral presentation[2]. And as Meat Loaf once sang, “two out of three ain’t bad.”

A few weeks ago, I began to pare the hour-plus-long PowerPoint presentation I had delivered at my doctoral defense down to two 12-minute-long talks. This meant leaving out many interesting “sensitivity” analyses, including estimates of what my incidence rate ratios (IRR) and risk ratios (RR) would have been without exposure or outcome misclassification.

(For a rough translation of that last bit, please see here.)

Realizing how much important detail I was forced to remove from these PowerPoint presentations, I hit upon the idea of making all of the background materials (i.e., my actual dissertation and the PowerPoint defense presentation) publicly available.

And thus you find here:

  1. A PDF of the full text of my doctoral dissertation—Measures of Neighborhood Walkability and Their Association with Diabetes and Depressive Symptoms in Black Women—minus the Acknowledgments (to protect privacy) and CV[3].

Berger Doctoral Dissertation Dec 2014

  2. The PowerPoint presentation I delivered in defense of my dissertation (excluding the “thank you” slides). The last slide was originally this short clip showing the 10th Doctor towards the end of the 2005 episode “The Christmas Invasion.”

Berger Doctoral Defense 2014

  3. The PowerPoint presentations I will be delivering at the APHA Meeting (although not until after I have presented them on November 7 and November 8).

Matthew Berger Measurement Talk 11-7-2017

Matthew Berger Depression Talk 11-8-2017

But this raises a question.

Why haven’t I already published these studies in peer-reviewed epidemiology journals? Isn’t that the usual procedure?

And here we find the rabbit hole I found myself hurtling down as I wrote an earlier draft of this post.

*********

A few months after my successful defense (and once the final logistical requirements had been completed), I received an e-mail from a committee member asking, in effect, where the drafts of my articles were.

Technically, my doctoral dissertation was on track to be published in the ProQuest Dissertation and Theses Global database, where it currently resides.

That is not the same, however, as advancing science through a peer-reviewed publication process; I understood (and had a very high regard for) that then, and I still do now.

But in the spring of 2015, I was still wicked burned out from completing the doctorate itself (with all that had preceded it) while working full time and helping to raise a young family.

I also had higher priorities in my life at that time. My grant-funded Data Manager position was ending in June 2015, and I needed to a) complete the data analysis and final report for that project and b) search for a new gig (or so I thought at the time). My elder daughter had her tonsils removed and needed a lot of parental TLC. And so forth.

In short, while I was perfectly happy to draft peer-reviewed journal articles from my three dissertation studies, I was not able to do so at that time.

Cutting right to the chase, the member of my doctoral committee and I engaged in an increasingly unpleasant e-mail exchange which ultimately ended in December 2015, when they decided no longer to pursue publication. The details of that exchange are irrelevant.

It is only now, however, that I understand what was really happening then.

For example, as I concluded my Data Analyst requirements, I was actively discussing a related, higher-level position with a different organization. Something kept holding me back, however, and I kept offering (sensible to me at the time) objections. I clearly never accepted that position.

Over the next two-plus years, as I applied to the few relevant positions I could find (58, although some of them were re-postings), my heart was simply never in the search. When I earned in-person interviews, I attended them with what you might call “subdued enthusiasm.” There was always some reason why this position was not quite right…even the last one, in March 2017, that seemed perfect when I first applied.

Even when I was twice offered exciting adjunct teaching positions (I would love to teach again), I ultimately talked myself out of both of them.

Do you see a pattern here?

What I have come to understand as I prepare for APHA, leading me to “publish” my doctoral dissertation here, is that my decision to change careers did not happen a few months ago. It happened, ironically, almost as soon as I walked out of that small meeting room on Albany Street in Boston on December 16, 2014.

Caught up in the perceived necessity of finding a new position in my then-current career, now supplemented with my newly-minted PhD, I could not comprehend, or accept, or grasp, that decision for another two-and-a-half years.

And so this post is not about reliving my unsettling communications with the members of my doctoral committee. It is about squaring a circle, or closing a loop, or whatever “completion” metaphor you prefer.

When I submitted those three abstracts to APHA in February, I was filled with optimism that the November Meeting in Atlanta would be just the place to rekindle my health-related data analysis spark, and where I would joyously engage in the networking necessary to land my next (first?) epidemiology-related position.

It turns out that it will actually be the last hurrah, the period at the end of a nearly 21-year-long sentence.

If you attend the APHA conference next week, I would be thrilled to have you listen to either or both of my presentations.

Otherwise….until next time…

[1] Upon completing my epidemiology doctorate, I finally (and successfully) applied to Harvard GSAS for the Master’s Degree I had earned before resigning.

[2] The incident diabetes study was not accepted.

[3] And, as far as I am concerned, this is tantamount to publication. Consider this passage from the BUSPH Epidemiology Doctoral Program Guidelines (2007, pg. 8): “The research…must meet the current standards of publication quality in refereed journals such as American Journal of Epidemiology, American Journal of Public Health, Annals of Epidemiology, Epidemiology, International Journal of Epidemiology, Journal of the American Medical Association, and New England Journal of Medicine. It is understood that the thesis papers may be longer and have more tables and figures than permitted in published papers.” Basically, once the members of my doctoral committee signed off on my doctoral dissertation, they were admitting that it already met those standards. Ergo…

Positively pondering pesky probabilities, perchance

One inspiration to start this “data-driven storytelling” blog was the pioneering work of Nate Silver and his fellow data journalists at FiveThirtyEight.com; their analyses are an essential “critical thinking” reality check on my own conclusions and perceptions. Indeed, when I finally get around to designing and teaching my course on critical thinking (along with my film noir course), the required reading would include Silver’s The Signal and the Noise and a deep dive into Robert Todd Carroll’s The Skeptic’s Dictionary. I would also include Ken Rothman’s Epidemiology: An Introduction; what drew me to epidemiology (besides my long career as a public health data analyst) was its epistemological aspect. By that I mean how the fundamental methods and principles of epidemiology allow us to critically assess any narrative or story.

To that end, I have been reading with great interest Silver’s 11-part series that “reviews news coverage of the 2016 general election, explores how Donald Trump won and why his chances were underrated by most of the American media.” And while I highly recommend the entire series of articles, the September 21 conclusion is the jumping-off point for my own observations about assessing the likelihood of various events.

**********

Let me begin with a passage from that article:

In recent elections, the media has often overestimated the precision of polling, cherry-picked data and portrayed elections as sure things when that conclusion very much wasn’t supported by polls or other empirical evidence.

I personally think investigative journalists are heroic figures who will ultimately save American democracy from its current self-induced peril. But they are trained in a very specific way: deliver the facts of a story with certainty and immediacy. In so doing, they are responding to media consumers with little patience for complex narratives suffused with uncertainty.

To quote Silver again, “a story can be 1. fast, 2. interesting and/or 3. true — two out of the three — but it’s hard for it to be all three at the same time.”

One narrative that developed fairly early about the 2016 presidential election campaign was that Democratic nominee Hillary Clinton was the all-but-inevitable victor. I wrote about one version of this flawed narrative here.

Reinforcing this narrative were election forecasts issued during the last weeks of the campaign that practically said “stick a fork in Trump, he is finished.” But as Silver rightly observes, some of these models were flawed because they failed to account for the “correlation in outcomes between [demographically similar] states.” For example, were Republican nominee Donald Trump to outperform his polls in Wisconsin on Election Day, he would likely also do so in Michigan, Minnesota and Iowa. And that is essentially what happened.

Still, because aggregating polls yields a more precise picture of the state of an election at a given point in time, I aggregated these 2016 election forecasts. Going into Election Day, here were some estimated probabilities of a Clinton victory, ranked lowest to highest.

FiveThirtyEight 71.4%
Betting markets 82.9%[1]
The New York Times Upshot 84.0%
DailyKos 92.0%
HuffingtonPost Pollster 98.2%
Princeton Election Consortium (Sam Wang) 99.5%

The average and median forecast was 88.0%. Remove the most skeptical forecast (which still made Clinton a 5:2 favorite), and the average and median jump to 91.3% and 92.0%, respectively. By contrast, remove the least skeptical forecast, and the average and median drop to 85.7% and 84.0%, respectively.
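These summary statistics are easy to re-check. Here is a minimal sketch (Python; the dictionary keys are just shorthand for the forecast sources listed above):

```python
from statistics import mean, median

# Election-eve 2016 forecasts of a Clinton win (percent), as listed above
forecasts = {
    "FiveThirtyEight": 71.4,
    "Betting markets": 82.9,
    "NYT Upshot": 84.0,
    "DailyKos": 92.0,
    "HuffPost Pollster": 98.2,
    "Princeton (Wang)": 99.5,
}

probs = sorted(forecasts.values())

# All six forecasts
print(round(mean(probs), 1), round(median(probs), 1))            # 88.0 88.0

# Drop the most skeptical forecast (FiveThirtyEight)
print(round(mean(probs[1:]), 1), round(median(probs[1:]), 1))    # 91.3 92.0

# Drop the least skeptical forecast (Wang)
print(round(mean(probs[:-1]), 1), round(median(probs[:-1]), 1))  # 85.7 84.0
```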

It is an understandable human tendency to look at a probability over 80% and “round up” from “very likely, but not guaranteed” to “event will happen.” And, under the frequentist definition of probability, we would be correct more than 80% of the time in the long run.

But we would also be wrong as much as 20% of the time.

Ignoring Wang’s insanely optimistic forecast for various reasons, the “aggregate” forecast I had in mind on Election Day was that Clinton had about an 84% chance of winning.

The flip side, of course, was that Trump had about a 16% chance of winning.

A good way to interpret this probability is to think about rolling a fair, six-sided die.

Pick a number from one to six. The chance that the number you picked comes up on a single roll is 1 in 6, or 16.7%.

On Election Day, Trump metaphorically needed to roll his chosen number…and he did.

But even if we take the Wang-inclusive average of 88%, that is still a 1 in 8 chance. Write the numbers one through eight on eight slips of paper, throw them in a hat (I like fedoras, myself), pick a number and draw a slip. If your number comes up (which will happen 12.5% of the time over many draws), you win.

Trump picked a number between one and eight then pulled it out of our hypothetical fedora, and he won the election.
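To make that die-and-fedora intuition concrete, here is a minimal simulation sketch (Python; the function name and trial count are my own illustration, not anything from the forecasts themselves):

```python
import random

random.seed(2016)  # for reproducibility

def hit_rate(p, trials=100_000):
    """Fraction of trials in which an event of probability p occurs."""
    return sum(random.random() < p for _ in range(trials)) / trials

# A ~16% Trump forecast is one face of a fair die coming up...
print(round(hit_rate(1 / 6), 3))   # ~0.167

# ...and a 12.5% forecast is one slip drawn from a hat of eight
print(round(hit_rate(1 / 8), 3))   # ~0.125
```

Run enough elections and the “unlikely” outcome shows up about one time in six; the only question is whether this election is that one time.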

One way people misunderstand probability (and one of many reasons I am resolutely opposed to classical statistical significance testing) is by mentally converting “event x has a very low probability” (like, say, matching DNA in a murder trial—only a 1 in 2 million chance!) into “that event cannot happen.”

So, even the Wang forecast—which gave Trump only a 1 in 200 chance of winning—did NOT mean that Clinton would definitely win. It only meant that Trump had to pull a specific number between one and 200 out of our hypothetical fedora. He did, and he won.

**********

On the other end of the spectrum is an overabundance of caution in assessing the likelihood of an event. This usually occurs when interpreting election polls.

In this post, I discussed Democratic prospects in the 2017 and 2018 races for governor.

One of the two governor’s races in November 2017 is in Virginia, where Democratic governor Terry McAuliffe is term-limited. The Democratic nominee is Lieutenant Governor Ralph Northam, and the Republican nominee is former Republican National Committee chair Ed Gillespie.

Here are the 13 public polls of this race listed on RealClearPolitics.com[2] taken after the June 13, 2017 primary elections:

Poll Date Sample MoE Northam (D) Gillespie (R) Spread
Monmouth* 9/21 – 9/25 499 LV 4.4 49 44 Northam +5
Roanoke College* 9/16 – 9/23 596 LV 4 47 43 Northam +4
Christopher Newport Univ.* 9/12 – 9/22 776 LV 3.7 47 41 Northam +6
FOX News* 9/16 – 9/17 507 RV 4 42 38 Northam +4
Quinnipiac* 9/14 – 9/18 850 LV 4.2 51 41 Northam +10
Suffolk* 9/13 – 9/17 500 LV 4.4 42 42 Tie
Mason-Dixon* 9/10 – 9/15 625 LV 4 44 43 Northam +1
Univ. of Mary Washington* 9/5 – 9/12 562 LV 5.2 44 39 Northam +5
Roanoke College* 8/12 – 8/19 599 LV 4 43 36 Northam +7
Quinnipiac* 8/3 – 8/8 1082 RV 3.8 44 38 Northam +6
VCU* 7/17 – 7/25 538 LV 5 42 37 Northam +5
Monmouth* 7/20 – 7/23 502 LV 4.3 44 44 Tie
Quinnipiac 6/15 – 6/20 1145 RV 3.8 47 39 Northam +8

Eight of these polls have Northam up between four and seven percentage points, including four of the last six. Two polls show a tied race. No poll gives Gillespie the lead.

And yet, here was the headline on Taegan Goddard’s otherwise-reliable Political Wire on September 19, 2017, referring to the just-released University of Mary Washington (Northam +5) and Suffolk polls (Even): Race For Virginia Governor May Be Close.

Granted, the two polls gave Northam an average lead of only 2.5 percentage points, which, without context, suggests a close race on Election Day. Furthermore, all three Political Wire Virginia governor’s race poll headlines since then have been on the order of: Northam Maintains Lead In Virginia.

Here is the thing, however. Most people (as I did) will equate “close” with “toss-up.” But there is a huge difference between “we have no idea who is going to win because the polls average out to a point or two either way” and “one candidate consistently has the lead, but the margin is relatively narrow.”

The latter is clearly the case in the 2017 Virginia governor’s race, with Northam’s lead averaging 4.4 percentage points in eight September polls within a narrow range (standard deviation [SD]=3.3). We are still more than five weeks from 2017 Election Day (November 7), so this is unlikely to be “herding,” the tendency of some pollsters to adjust their demographic weights and turnout estimates to avoid an “outlier” result (undermining the rationale for aggregating polls in the first place).

The problem comes when members of the media try to interpret the results of individual polls. They have absorbed the lesson of the “margin of error” (MoE) almost too well.

For example, the Monmouth poll conducted September 21-25, 2017 gives Northam a five percentage point lead, with a 4.4 percentage point MoE. Applying that MoE to both candidates’ vote estimates, we have 95% confidence that the “actual” result (if we had accurately surveyed every likely voter, not a sample of 499) is somewhere between Gillespie 48.4, Northam 44.6 (Northam down 3.8) and Northam 53.4, Gillespie 39.6 (Northam up 13.8). It is this range of possible outcomes, from a somewhat narrow Gillespie victory to a comfortable Northam win, that leads members of the media to imply through oversimplification that this race will be close, meaning “toss-up.”

And yet, even within this poll, the probability (using a normal distribution with mean = 5.0 and SD = 4.4) that Northam is ahead by as little as 0.0001 percentage points is 87.2%, making him a 7:1 favorite, about what Hillary Clinton was on Election Day 2016.

OK, maybe that was not the best example…

But when you aggregate the eight September polls, the MoE drops to about 1.3[3], putting the probability Northam is ahead at well over 99%. Even if the MoE only dropped to 3.0, the probability of a Northam lead would still be about 93%.
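The arithmetic behind these probabilities is just the normal cumulative distribution function, which the standard library can supply via the error function. A sketch (Python; like the calculation above, it treats the quoted MoE as the standard deviation of the margin, which is itself a simplification):

```python
from math import erf, sqrt

def prob_leading(margin, sd):
    """P(true margin > 0), treating the estimated margin as Normal(margin, sd)."""
    return 0.5 * (1 + erf(margin / (sd * sqrt(2))))

print(round(prob_leading(5.0, 4.4), 3))   # 0.872  (Monmouth: Northam +5, MoE 4.4)
print(round(prob_leading(4.4, 1.3), 3))   # 1.0    (aggregated September polls)
print(round(prob_leading(4.4, 3.0), 3))   # 0.929  (even with a 3.0 MoE)
```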

My point is this. Every poll needs to be considered not just as an item in itself (polls as NEWS!) but within the larger context of other polls of the same race. And in the 2017 Virginia governor’s race, the available polling paints a picture of a narrow but durable lead for Northam.

I have no idea who will be the next governor of Virginia. But a careful reading of the data suggests that, as of September 29, 2017, Lt. Governor Ralph Northam is a heavy favorite to be the next governor of Virginia, despite being ahead “only” 4 or 5 percentage points.

**********

Finally, here is an update on this post about the Democrats’ chances of regaining control of the United States House of Representatives (House) in 2018.

Out of curiosity, I built two simple linear regression models. One estimates the number of House seats Democrats will gain in 2018 only as a function of the change from 2016 in the Democratic share of the total vote cast in House elections. The Democrats lost the total 2016 House vote by 1.1 percentage points, so if they were to win the 2018 House vote by 7.0 percentage points, that would be an 8.1 percentage point shift.

Right now, FiveThirtyEight estimates Democrats have an 8.0 percentage point advantage on the “generic ballot” question (whether a respondent would vote for the Democratic or the Republican House candidate in their district if the election were held today).

My simple model estimates a pro-Democratic House vote shift of 9.1 percentage points would result in a net pickup of 26.7 House seats, a few more than the 24 they need to regain control. The 95% confidence interval (CI) is a gain of 17.0 to 36.4 seats.

But the probability that Democrats net AT LEAST 24 House seats is 71.1%, making the Democrats 5:2 favorites to regain control of the House in 2018.

My more complex model adds a variable that is simply 1 for a midterm election and 0 otherwise, as well as the product of this “dummy” variable and the change in Democratic House vote share. I hypothesized (correctly) that this relationship would be stronger in midterm elections.

This model estimates that a 9.1 percentage point increase from 2016 in the Democratic share of the House vote would result in a net gain of 31.8 seats. However, with two additional independent variables (and only 24 data points), the 95% CI is much wider, from a loss of 7.0 seats to a history-making gain of 68.3 seats.

Still, this translates to a 66.1% probability (2:1 favorites) the Democrats regain the House in 2018.
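Both probabilities can be recovered from each model’s point estimate and 95% CI, assuming the estimate is normally distributed with the CI spanning ±1.96 standard errors (a sketch; the slight differences from the reported 71.1% and 66.1% come from rounding in the CIs):

```python
from math import erf, sqrt

def prob_at_least(threshold, estimate, ci_low, ci_high):
    """P(outcome >= threshold) under a normal approximation to the estimate."""
    sd = (ci_high - ci_low) / (2 * 1.96)   # 95% CI spans +/- 1.96 SE
    z = (estimate - threshold) / sd
    return 0.5 * (1 + erf(z / sqrt(2)))

# Simple model: 26.7-seat gain, 95% CI [17.0, 36.4]
print(round(prob_at_least(24, 26.7, 17.0, 36.4), 2))   # ~0.72

# Complex model: 31.8-seat gain, 95% CI [-7.0, 68.3]
print(round(prob_at_least(24, 31.8, -7.0, 68.3), 2))   # ~0.66
```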

Figure 1 shows the estimated probability the Democrats regain the House in 2018 using both models and a range of percentage point changes in House vote share from 2016.

Figure 1: Probability Democrats Control U.S. House of Representatives After 2018 Elections Based Upon the Change in Democratic Share of the House Vote, 2016-18


The simple model (blue curve) gives the Democrats essentially no chance to recapture the House in 2018 until the pro-Democratic change in vote share reaches 6.5 percentage points, after which the probability rises sharply to near-certainty at the 10.0 percentage point mark. The more complex model (red curve), meanwhile, assigns steadily increasing chances to the Democrats, flipping to “more likely than not” at the 7.0 percentage point mark; even at a truly historic 15 percentage point change, the complex model gives the Democrats only an 85.3% chance to recapture the House in 2018.

For the record, I lean toward the more complex model.

It is worth noting that in the current FiveThirtyEight estimate, 15.8% of the electorate is undecided or chose a third-party candidate (when one was an option). If the undecided vote breaks heavily toward the party not controlling the White House in a midterm election (one way electoral “waves” form), a 66-71% probability would likely be an underestimate of the Democrats’ chances of regaining control of the House in 2018.

And…apropos of nothing…Happy 51st Birthday to me (September 30, 2017)!!

Until next time…


Using Jon Ossoff polling data to make a point about statistical significance testing

I do not like the phrase “statistical dead heat,” nor do I like the phrase “statistical tie.” These phrases oversimplify the level of uncertainty accruing to any value (e.g., polling percentage or margin) estimated from a sample of a larger population of interest, such as the universe of election-day voters; when you sample, you are only estimating the value you wish to discern. These phrases also reduce quantifiable uncertainty (containing interesting and useful information) to a metaphorical shoulder shrug: we really have no idea which candidate is leading in the poll, or whether two estimated values differ or not.

For example, a poll released June 16, 2017 showed Democrat Jon Ossoff leading Republican Karen Handel 49.7% to 49.4% among 537 likely voters in the special election runoff in Georgia’s 6th Congressional District. The margin of error (MOE) for the poll was +/-4.2%, meaning that we are 95% confident that Ossoff’s “true” percentage is between 45.5% and 53.9%, while Handel’s is between 45.2% and 53.6%.
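That +/-4.2% is the standard 95% margin of error for a sample of 537, computed at the worst case of a 50/50 split (which maximizes the MOE); a quick sketch:

```python
import math

n = 537   # likely voters sampled
p = 0.5   # worst-case proportion, which maximizes the margin of error

# 95% MOE: 1.96 standard errors of a sample proportion
moe = 1.96 * math.sqrt(p * (1 - p) / n)
print(f"{moe:.1%}")   # 4.2%
```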

In other words, these data suggest a wide range of possible values, anywhere from Ossoff being ahead 53.9 to 45.2% to Handel being ahead 53.6 to 45.5%. In fact, there is a 5% chance that either candidate is further ahead than that. Finally, because random samples such as these are drawn from a normal (or “bell curve”) distribution, percentages closer to those reported (Ossoff ahead 49.7 to 49.4%) are more likely than percentages further from those reported.

But this is a lot to report, and to digest, so we use phrases like “statistical dead heat” or “statistical tie” as cognitive shorthand for “there is a wide range of possible values consistent with the data we collected, including each candidate having the exact same percentage of the vote.”

Each phrase has its roots in classical statistical significance testing. The goal of this testing is to assure ourselves that any value we estimate from data we have collected (a percentage in a poll, a relative risk, a difference between two means) is not 0.

To do so, we use the following, somewhat convoluted, logic.

Let’s assume that the value (or some test statistic derived from that value) we have estimated actually is 0; we will call this the null hypothesis. What is the probability (we will call this the “p-value”) that we would have obtained this value/test statistic, or one even more extreme, purely by chance?

Got that?

We are measuring the probability—assuming that the null hypothesis is true—that a value (or one more extreme) was obtained purely by chance.

And if the probability is very low, it would be very unlikely that we obtained our value purely by chance, so it must be the case that we did NOT get it by chance. And so we can “reject” the null hypothesis (even though we assumed it to be true to arrive at this rejection), given that the value we got would be so unlikely under it.

The higher the probability, the more difficult it becomes to reject the null hypothesis.

By historical accident, any p-value less than 0.05 is considered “statistically significant,” meaning that we can reject the null hypothesis.

Of course, we REALLY want to know how probable the null hypothesis itself is, but that is a vastly trickier proposition.

Or, even better…we REALLY want to know how likely the actual value we observed is.

Think about it. All we really learn from classical statistical significance testing is either “our value is probably not 0” or “we cannot be certain that our value is not 0…it just might be.” This tells us nothing about the quality of the actual estimate we obtained, or how near the “true” value it actually is.

Now, to be fair to the 0.05 cut-point for determining “statistical significance,” it does have an analogue in the 95% confidence interval.

The 95% confidence interval (CI) is very similar to the polling MOE discussed earlier. It is a range of values (often calculated as value +/-1.96*standard error[1]) which we are 95% confident includes the “true” value.

Let’s say you estimate the impact of living in a less walkable neighborhood relative to living in a more walkable neighborhood on incident diabetes over 16 years of follow-up. Your estimate is 1.06 (i.e., you have 6% higher risk of contracting diabetes), with a 95% CI of 0.90 to 1.24. In other words, you are 95% confident that the “true” effect is somewhere between a 10% decrease in incident diabetes risk and a 24% increase in incident diabetes risk.

Ahh, but this is where that pesky cognitive shorthand comes back. See, that 95% CI you reported includes the value 1.00 (i.e., no effect at all). Therefore, there is likely no effect of neighborhood walkability on incident diabetes.

No, no, a thousand times no.

It simply means that there is a specified range of possible measures of effect, only one of which is “no effect.” In fact, the bulk of the possible effects are on the risk side (1.01-1.24), rather than on the “protective side” (0.90-0.99).
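One way to make that concrete: relative-risk CIs are conventionally computed on the log scale, so the implied standard error can be backed out of the reported interval and used to estimate how much of the distribution sits above 1.00. A sketch, assuming approximate normality of the log relative risk:

```python
import math
from statistics import NormalDist

rr = 1.06            # reported relative risk of incident diabetes
lo, hi = 0.90, 1.24  # reported 95% CI

# Back out the standard error of log(RR) from the CI half-width
se = (math.log(hi) - math.log(lo)) / (2 * 1.96)

# Probability the "true" effect is on the risk side (RR > 1.00)
p_harm = 1 - NormalDist(mu=math.log(rr), sigma=se).cdf(0.0)
print(round(p_harm, 2))   # ~0.76
```

In other words, roughly three-quarters of the plausible effects lie on the risk side, which is a far more informative statement than “not statistically significant.”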

Just bear with me while I come to the point of this statistical rigmarole.

Early this morning, I posted this on Facebook:

The election-eve consensus is that the Jon Ossoff-Karen Handel race (special election runoff in Georgia’s 6th Congressional District) is a dead heat, with Handel barely ahead. This consensus is based in large part on the RealClearPolitics polling average (Handel +0.25). However, RCP only includes the most recent poll by any given pollster, and only within a very narrow time frame.

Hogwash (for the most part).

All polls are samples from a population of interest, meaning that you WANT to pool recent polls from the same pollster (each is a separate dive into the same pool using the same methods). Also, I found no evidence that the polling average has changed much since the first round of voting on April 18.

My analysis (90% hard science, 10% voodoo) is that Ossoff is ahead by 1.4 percentage points. Assume a very wide “real” margin of error of 9 percentage points, and Ossoff is about a 62% favorite to win today. 

Meaning, of course, that there is a 38% chance Handel wins.

That is still a very close race, but I would give Ossoff a small edge.

And, bloviating punditry aside, for Ossoff even to lose by a percentage point would be a remarkable pro-Democratic shift for a Congressional seat Republicans have dominated for 40 years.

Polls close at 7 pm EDT.

Here is the full extent of my reasoning.

I collected all 12 polls of this race taken after the first round of voting on April 18, 2017. Four were conducted by WSB-TV/Landmark and showed Ossoff ahead by 1 percentage point (polling midpoint 5/31/2017), 3 (6/7), 2 (6/15) and 0 (6/19) percentage points. Two each were conducted by the Republican firm Trafalgar Group (Ossoff +3 [6/12], Ossoff -2 [6/18]) and by WXIA-TV/SurveyUSA (Ossoff +7 [5/18], even [6/9]). Other polls were conducted by Landmark Communications (Ossoff -2 [5/7]), Gravis Marketing (Ossoff +2 [5/9]), the Atlanta Journal-Constitution (Ossoff +7 [6/7]) and Fox 5 Atlanta/Opinion Savvy (Ossoff +1 [6/15]).

Taken together, these 12 polls show Ossoff ahead by an average of 1.83 percentage points.

Using a procedure I suggest here, I subtracted the average of all other polls from the average of the polls from a single pollster. For example, the average of the four WSB-TV/Landmark polls was Ossoff +1.5, while the average of the other eight polls was Ossoff +2.0. This difference—or “bias”—of -0.5 percentage points suggests the WSB-TV/Landmark polls may have slightly underestimated the Ossoff margin.

I then “adjusted” each poll by subtracting its “bias” from the original polling value (e.g., I added 0.5 to each WSB-TV/Landmark Ossoff margin). For convenience, I lumped the pollsters releasing only one poll into a single “other” category; its “bias” was only about 0.2.

The “adjusted” Ossoff margin was now +1.865.

To see whether the Ossoff margin had been increasing or decreasing monotonically over time, I ran an ordinary least squares (OLS) regression of Ossoff margin against polling date midpoint (using the average, if polls had the same midpoint date). There was no evidence of change over time; the r-squared (a measure of the variance in Ossoff margin accounted for by time) was 0.01.

Still, out of an abundance of caution, I decided to assign a weight of 2 to the most recent poll by WSB-TV/Landmark, Trafalgar Group and WXIA-TV/SurveyUSA and a weight of 1 to the other nine polls.

Using the bias-adjusted polls and this simple weighting scheme, I calculated an Ossoff margin of 1.38, suggesting recent tightening in the race not captured by my OLS regression[2].
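The whole bias-and-weighting procedure fits in a few lines of Python. A sketch using the poll margins listed above: with exact (unrounded) biases the adjusted average comes out near 1.85 and the weighted average near 1.35; the post’s 1.865 and 1.38 reflect intermediate rounding of the biases.

```python
# Ossoff margins by pollster; the last entry in each list is the most recent poll
polls = {
    "WSB-TV/Landmark":   [1, 3, 2, 0],
    "Trafalgar Group":   [3, -2],
    "WXIA-TV/SurveyUSA": [7, 0],
    "Other":             [-2, 2, 7, 1],  # the four single-poll outfits, lumped
}

def mean(xs):
    return sum(xs) / len(xs)

adjusted, weights = [], []
for name, margins in polls.items():
    others = [m for other, ms in polls.items() if other != name for m in ms]
    bias = mean(margins) - mean(others)  # pollster "house effect" vs. the field
    for i, m in enumerate(margins):
        adjusted.append(m - bias)        # remove the estimated bias
        # double-weight the most recent poll of each multi-poll pollster
        is_recent = (i == len(margins) - 1) and name != "Other"
        weights.append(2 if is_recent else 1)

adj_avg = mean(adjusted)
wtd_avg = sum(a * w for a, w in zip(adjusted, weights)) / sum(weights)
print(round(adj_avg, 2), round(wtd_avg, 2))   # 1.85 1.35
```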

So, let’s say that our best estimate is that Ossoff is ahead by 1.38 percentage points heading into today’s voting. There is a great deal of uncertainty around this estimate, resulting both from sampling error (an overall MOE of 2.5 to 3 percentage points around an average Ossoff percentage and an average Handel percentage, which you would double to get the MOE for the Ossoff margin—say, 5 to 6 percentage points) and the quality of the polls themselves.

Now, let’s say that our Ossoff margin MOE is nine percentage points. I admit up front that this is somewhat arbitrary (an MOE larger than six percentage points), chosen to make a point.

In a normal distribution, 95% of all values are within two (OK, 1.96) standard deviations (SD) of the midpoint, or mean. If you think of the Ossoff margin of +1.38 as the midpoint of a range of possible margins distributed normally around the midpoint, then the MOE is analogous to the 95% CI, and the standard deviation of this normal distribution is thus 9/1.96 = 4.59.

To win this two-candidate race, Ossoff needs a margin of one vote more than 0%. We can use the normal distribution (mean=1.38, SD=4.59) to determine the probability (based purely upon these 12 polls taken over two months with varying quality) that Ossoff’s margin will be AT LEAST 0.01%.

And the answer is…61.7%!
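That figure falls straight out of the normal distribution just described (mean 1.38, SD = 9/1.96); a minimal check:

```python
from statistics import NormalDist

margin, moe = 1.38, 9.0   # estimated Ossoff margin and assumed MOE, in points
sd = moe / 1.96           # implied standard deviation, about 4.59

# Probability Ossoff's margin is at least 0.01 percentage points
p_win = 1 - NormalDist(mu=margin, sigma=sd).cdf(0.01)
print(f"{p_win:.1%}")     # 61.7%
```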

Using a higher SD will yield a win probability somewhat closer to (but still larger than) 50%, while a lower SD will yield an even higher win probability.

Here is the larger point.

It may sound like Ossoff +1.38 +/-9.0 is a “statistical dead heat” or “statistical tie” because it includes 0.00 and covers a wide range of possible margins (Ossoff -7.62 to Ossoff +10.38, with 95% confidence), but the reality is that this range of values includes more Ossoff wins than Ossoff losses, by a ratio of 62 to 38.

You can reanalyze these polls and/or question my assumptions, but you cannot change the mathematical fact that a positive margin, however small and however large the MOE, is still indicative of a slight advantage (more values above 0 than below).

Until next time…

**********

This is an addendum started at 12:13 am on June 21, 2017.

According to the New York Times, Handel beat Ossoff by 3.8 percentage points, 51.9 to 48.1%. My polling average (Ossoff +1.4) was thus off by -5.2 percentage points. That is a sizable polling error. RealClearPolitics (RCP) was somewhat closer (off by -3.6 percentage points), while HuffPost Pollster (HPP) was the most dramatically different (-6.2 percentage points).

Why such a stark difference? And why was EVERY pollster off (the best Handel did in any poll was +2 percentage points, twice)?

I think the answer can be found in a simple difference in aggregation methods. RCP used four polls in its final average, with starting dates of June 7, June 14, June 17 and June 18; its final average was Handel +0.2. HPP, however, included no polls AFTER June 7, and its final average was Ossoff +2.4, a difference of 2.6 percentage points in Handel’s favor.

Moreover, Handel’s final polling average was 2.1 percentage points higher in RCP (49.0 vs. 46.9%), while Ossoff’s was only 0.5 percentage points lower (48.8 vs. 49.3%).

In other words, over the last week or so of the race, Handel was clearly gaining ground, while Ossoff was fading slightly.

What could have caused this shift?

On the morning of June 14, 2017, a man named James T. Hodgkinson opened fire on a group of Republican members of Congress, members of the Capitol Police and others on an Alexandria, Virginia baseball diamond. Mr. Hodgkinson, who claimed to have volunteered on Senator Bernie Sanders’ 2016 presidential campaign, appeared to be singling out Republicans for attack; he had posted violent anti-Trump and anti-Republican screeds on his Facebook page.

When this ad, brazenly (and absurdly) tying Ossoff to the left-wing rage and violence deemed responsible for the Alexandria shooting, started playing in Georgia’s 6th Congressional District, I thought it was a despicable and desperate attempt to save Handel from a certain loss.

But the overarching message of “blame the left” appears to have resonated with district residents who otherwise may not have voted. The final poll of the campaign found that “…a majority of voters who had yet to cast their ballots said the recent shootings had no effect on their decision. About one-third of election-day voters said the attack would make them ‘more likely’ to cast their ballots, and most of those were Republican.”

It is conceivable that this event changed a narrow Ossoff win into a narrow loss, as disillusioned Republicans decided to cast an election-day ballot for Handel in defense of their party. While Ossoff won the early vote by 5.6 percentage points (and 9,363 votes), he lost the election day vote by a whopping 16.4 percentage points (and 19,073 votes).

Ossoff may well have lost anyway, for other reasons: his non-residence in the district, the difference between Republican opposition to Trump and support for mainstream Republicans, the amount of outside money which flowed into the district (making it harder for Ossoff to cast himself as a more centrist, district-friendly Democrat; the Democrat in the most expensive U.S. House race in history lost by a larger margin [3.8 percentage points vs. 3.2 percentage points] than the Democrat in the barely-noticed South Carolina 5th Congressional District special election held the same day) and his inexperience as a politician.

But the fact that Handel herself cited the Alexandria shooting in her victory speech (starting at 03:23) speaks loudly about why SHE thinks she won the election.

Until next time…again…

[1] Itself usually calculated as the standard deviation divided by the square root of the sample size.

[2] Other recency weighting schemes yielded similar results.