Fear in the Time of Corona

As I write this, government leaders are listening to epidemiologists (well, not all “leaders”) and “cancelling everything.” This is a prudent course of action, especially while we learn more about the novel coronavirus and gather evidence about COVID-19. Knowing more about how fatal it is will help us all make sound choices about how to proceed from here.

But how cautious are we prepared to be? If, for example, we learn that the fatality rate is lower than currently feared (1-5%), and more closely resembles the seasonal flu (if slightly higher), how will we proceed?

I’ll be looking to leadership by American Governors to make good choices, and hoping they act in coordination with fellow State leaders.

I expect the Federal government to make major fiscal missteps that benefit wealthy Americans at the expense of poor Americans. I wish this were not the case, but it is (and racist as well).

I hope Americans will learn to take the public good more seriously in the future, especially with regard to universal healthcare and paid sick leave on par with the world’s most compassionate nations.

And, lastly, I hope we continue to vote out liars, warmongers, and thieves.

Is Curiosity Fleeting, or Worse?

From an evolutionary perspective, there is a clear reason why animals would seek out information: it can be vital to their survival and reproduction… Another possibility is that evolutionary pressures have made information intrinsically rewarding. – From HuffPost

For educators with an interest in enhancing the truthiness of society, the present is a good time for reflection on gaps between our shared myths and the truth. I’m particularly worried about the negative effects of a myth close to the heart of educators: the idea that humans are naturally curious.

Curiosity is certainly valuable. The article Curiosity is Fleeting, but Teachable by Bryan Goodwin is a nice overview of the relevance of a discussion about curiosity to educators. He summarizes recent research:

A recent meta-analysis concluded that together, effort and curiosity have as much influence on student success as intelligence does (von Stumm, Hell, & Chamorro-Premuzic, 2011). Other studies have linked curiosity to better job performance (Reio & Wiswell, 2000); greater life satisfaction and meaning (Kashdan & Steger, 2007); and even longer lives (Swan & Carmelli, 1996).

But perhaps more troublingly:

The longer children stay in school, the less curiosity they tend to demonstrate (Englehard & Monsaas, 1988).

Psychological research suggests that while humans start life seemingly curious, environmental influences can diminish that curiosity. I think this suggests that we are minimally curious, and that this baseline is better thought of as information-seeking rather than knowledge-seeking. Information-seeking behavior seems closely related to plodding around the globe with a focus on survival and reproduction, and may have become part of our nature through natural selection. More information, more survival. Can the same be said for curiosity?

If there is nothing natural about curiosity, then it is a mistake to think that children (or people of any age) are going to be motivated by it. Students might ask about understanding and knowledge: what’s in it for us?

I think the answer has to be, “better tools.” Curiosity, imagination, and understanding are closely linked in the history of tool-making. Approaching curiosity as a learned behavior is a good step toward designing pedagogy to inspire imagination and motivate the process of understanding.

Living Versus Imagining

What if one of the make-or-break achievements in life is learning how to grapple with the following challenge:

Live in the present, but imagine in the future.

What if that is much easier said than done? What does it take to really imagine in the future? How does one really assess the “present”?

What if the desire to align one’s actions/behaviors to an imagined future is really counterproductive? What if I could be undertaking much more productive projects if I committed more fully to a near-term agenda? (Is diversifying one’s actions a matter of hedging against an unknown future?)

And what if, instead of trying to imagine my way out of the present, I let my imagination wander more freely? What if I made grander assumptions about the future? Would that in fact help me choose better projects in the present? (Isn’t this really what I already do—but not really with much self-awareness?)

What’s a better direction to push in? Connect the present with the imagined? Or disentangle them further?

Isn’t it also a bit of a paradox to live without imagining the future? Where does the absurdity kick in? When I try to align my actions to things that haven’t yet taken place but could transpire in 10, 20, 30 or 100 years? And can one align one’s actions to things that seem unlikely to ever transpire? Would this be considered rational behavior?

It’s the economy.

A simple example that inspires this meditation is how financial markets allow investors to place bets on the future, thereby enabling businesses to use capital to make that future more likely to transpire. Or money itself, really—an invention of human imagination that enables humans to align their actions in innovative and world-changing ways. We are able to use imagination to change the future—literally building the living conditions and constraints of humans not yet born.

This is both very banal (we determine the future!) and operationally unsettling (the quality of our imagination can determine every aspect of human livelihood!). Particularly: how much can any one human really contribute to this reality-bending? And in a deeply pro-capitalist, anti-humanist society, how is the scale and scope of one’s contribution directly tied to their wealth?

We’re doomed?

What if we humans are just not that good at imagining complex things? (Or just not that good at thinking in general?) Or what if the humans who are good at imagining are systematically selected against (to lean on evolutionary terminology) when capital is distributed? Or what if the selection process that would eventually promote “good imaginers” (obviously a loaded notion) is just too slow?

 

On Moral Leadership

NYTCREDIT: Chang W. Lee/The New York Times

On this day, hundreds of thousands of people are marching together throughout the world to protest Donald Trump’s inauguration yesterday. I write in sympathy with these marchers, with the hope of creating more understanding between the 63 million Americans who voted for Trump, and the 66 million who did not. (Yeah, it’s a long shot, so I’ll try to keep it short.)

Here, I want to acknowledge Donald Trump as a moral leader. I think that we “on the left” don’t create enough space in everyday conversation to allow for this. We tend to get stuck on the immoral (or even just amoral) actions we’ve witnessed, and lash out with the claim that “he is not moral,” and so on. This kind of communication is likely to underlie many of the demonstrations today.

But I’ve done a lot of thinking about morality in the course of my education, and I think we should acknowledge that there are many visions of the good. Actions in line with such visions are generally regarded as “moral.” Groups of people vie for the moral high ground—the argumentative advantage that their good is the good. When history settles, the winner gets to “write it,” as the saying goes. Prematurely, then, we hope we are the victors, but sometimes we are not.

I think it may be unwise to pursue this moral position in the time of Trump. (Perhaps just too late.)

A more pluralistic understanding of morality has the consequence of raising the bar on our descriptions of the good. We have to say more about what we want, what it means, and why it deserves to be part of our vision. Of course we do this; we do it all the time. It’s the kind of talk we all look for in a visionary leader. But—and be honest now—when was the last time you sat down with a spreadsheet and charted out all the pieces of your vision, how they are connected, and what the costs are of achieving them? It’s the kind of thing we generally do shorthand (e.g., pulling bits from the news or op-ed pieces), allow others to do for us (especially “the political class”), or maybe even forget to do.

I think the cost of omitting this tally of the sum total of our vision of the good (assuming we even have one, or only one) is higher than we think. If, for example, our vision isn’t as coherent as we think it is, then we need to be more open to criticism. My suspicion is that many people voted for Trump because Clinton seemed too smug and sure of herself—not so much because of what she said or how she said it, but because her representation of policies didn’t sit well with the people actually experiencing economic despair.

Or, in other words, the Culture War may have played a smaller role in Clinton’s loss than we think. Yet I’m not making an “It’s the economy, stupid” argument. I think the problem is about articulating a coherent vision of the good. I think it’s what Obama was able to do, though I think it’s fair to say he spent down most of the “capital” the Left had—for better or worse—pursuing a diverse, meaningful agenda that unfortunately was not seen as doing enough fast enough for many Americans (well okay, maybe in a hasty sense it is an “It’s the economy, stupid” moment). I don’t know if it was possible to do more, but he certainly didn’t go out of his way to cooperate with the Republican-led Congress.

So here came Trump with an alternative vision of the good. Racist. Sexist. Anti-immigrant. Isolationist. Anti-media. Anti-science. Anti-democratic. Fascist. But importantly: distinctly alternative.

It’s a vision nonetheless. It’s not even particularly coherent; I’m not sure how one can hold a coherent vision that’s anchored in an anti-science denial of global warming. But it was different than visions afforded by the Democrats. It was starkly different from even most visions outlined by more traditional Republicans. It was essentially an anti-establishment vision, and he wowed enough Americans to rise to power.

Philosophically, then, I acknowledge Donald Trump as a moral leader in a weak sense—allowing room for the fact that his vision is compelling for some people just as surely as other leaders’ visions inspire others. To acknowledge this is to step (however unwillingly) into a different political landscape than we’ve become accustomed to. I think it means we should at least contemplate abandoning the competition for “moral leadership” in a strong sense—meaning that we are somehow striving toward ultimate agreement and understanding, and a unanimously shared view that a singular vision of the good has once and for all risen above all others. As in the Judeo-Christian sense.

I think it’s important for the Left to start now from a different place. We should, instead, be focused on how the policy positions Trump represents (or, indeed, is unable to define) differ from our own. Particularly how we think they will lead to outcomes that we find undesirable. Once we’ve agreed on how to articulate that, we need to be more strategic in enacting communication that directs attention to our visions.

Yes, my heart is with the pussyhat, but my mind charts a somewhat different course for future moral leaders to help us achieve justice around the globe.

This short essay was drafted in an afternoon. I hope to be able to clarify and expand it over time!

What’s Not to Like about Innovation?

Like creativity, innovation is a diffuse concept that requires a significant amount of rehabilitation to be used in an effective, precise way. The two concepts are indeed often intertwined. But I would want to argue that “innovation” is analogous to corporate personhood—and deserving of the same liberal ire.

OK, let me unpack this a bit. First, it appears as if organizations more often (are said to) seek innovation whereas individuals seek creativity. To wit: “innovation will lead us to the next big product.” Creativity seems to align better with masterpieces and experiments.

Innovation is “new;” creativity is “original.”

Both these statements drive me a bit bonkers (insofar as they are often unsubstantiated), but they can of course be meaningful and profound. But are these perhaps two sides of the same coin? Or should they be understood entirely differently?

Shouldn’t they be viewed analogously to persons and corporate persons—one aspirational, and the other antagonistic to the aspiration?

Yet at every turn both concepts will resist definition. Is innovation about technological change? Well, not exactly. Is creativity about imagination? Well, again, not exactly. An essay would have to focus on a broad-yet-common conceptualization of each term, and locate historical uses that exemplified their similarities and contrasts.

Could pitting them against each other be instrumental in expressing value for humanity over technology-for-technology’s-sake? Maybe!

It seems worth trying.

Will Educators Own the Future?

Likely not.

I just finished reading Jaron Lanier’s ‘Who Owns the Future?’—about a year after the rest of the world, it turns out—and I’m not optimistic.

It was an excellent read, especially due to Lanier’s broad experience with technologies and his interest in economics. He offers educators a lot to think about, such as:

Will teaching be a middle class job (at least) in the future?

Will humans even be paid to teach?

How will education be limited by software? And how will that software hide the contribution of humans?

These are some questions at the core of his ruminating, and the thesis of the book (that the world is generally headed in the wrong direction with respect to how networks are designed and used) opened up these questions in new ways for me.

I am afraid I am quite sympathetic to his worries. Unfortunately his bleak vision of the future isn’t well-balanced by his ideas for how to mitigate the present dangers of technology and create a better world for humans.

In general, I’d like to think I’m working on a solution just by working in the education sector. But Lanier gives me pause, and a lot to think about.

Waiting on Moral Excellence

It’s been a while since I’ve worked on my essay on retirement and moral achievement. Originally, I set out with two goals. Firstly, I wanted to be pragmatic about the purpose of the essay (“It’s for the Baby Boomers”), and write with an appropriate sense of urgency. Secondly, I wanted to settle some philosophical scores by blending normative ethics with metaethics to arrive at a satisfying kind of self-reflection (on the part of the reader). Unfortunately, this pragmatic aim unravels pretty fast, as the reader is left to grapple with the irony that the unfinishedness of life is perhaps only surpassed by the inability of philosophy (language itself?) to surmount it.

I still think it’s a worthwhile project. To make progress, perhaps I need a better focus: either I should work on a more tractable philosophical problem or go more boldly into the charlatanry of “self-help” literature. Alas, these are equally tempting options!

A short overview of the essay:

From a great array of possible lives, we have so far, for better or worse, each arrived at one life. But despite a Romantic (if thin) conception of self, the kind of ethics we live by are likely best described as diverse, fragmented, and incomplete. Why is this the case? And, more to the point, what does it—let’s call it fragmentedness—mean for us? Does it mean we will not be able to be happy, successful, or wise? Or does it mean we may be all those things but that we may be unable to escape the doubt that we are not so? To better understand the experience of retirement, we must first develop a kind of “double vision” of ethics—seeing at once that fragmentedness may be necessary, as well as coming to believe that it is essentially untenable. We will also encounter the deep-seated philosophical problem of whether or not there are such things as “moral facts.”

Gauguin’s Artistic Quest to Achieve Moral Excellence

The following excerpt is from an unpublished essay on Gauguin (the artist) and Genius (the concept). First explored in a chapter of my doctoral dissertation, my argument is that Gauguin was a highly moral person – in spite of his sometimes reckless and irresponsible actions. I think this is an important counterintuitive case, for it allows us to consider how different (and sometimes competing) aspects of personhood define not only our own conception of morality, but moral philosophy itself.

From the introduction:

Why did middle-aged Paul Gauguin abandon his family and social context to live as a poor, reclusive painter? Could a moral conception be said to have sealed his fate to live in Tahiti as an estranged and unhealthy expatriate until his untimely death? The answer may lie in his artistic oeuvre, which includes over forty self-portraits in several different mediums, including more than twenty oil paintings. Here I argue that his self-portraits, in conjunction with his self-reflection in many letters to his wife and friends, form evidence that a conception of artistic genius became a touchstone of his art and life — a comprehensive conception of “goodness” that shaped his reception of tradition and transformed his whole life into a mythic quest.

Further evidence comes in the form of philosophical context. Romanticism and religion, two influential social currents of the Parisian artworld, fed into Gauguin’s perception of himself as an artist — he was, after all, first persuaded to take his painting seriously by his contemporaries. Reflection on spirituality became a prominent feature of his “artistic consciousness,” and became a theme that ran through his work in self-portraiture. This reflection, against a backdrop of Romantic and religious imagery, led Gauguin to discover a concept of artistic genius — heightened by Romanticism’s obsession with aesthetic transcendence — that especially propelled his artistic and spiritual quests.