The Crosshairs of the Split Hairs: #digciz

This week, Mia Zamora and I are kicking off #digciz 2017 with a conversation about digital citizenship, and what it means in a world wherein “the digital” is increasingly a delivery system for surveillance and spectacle and amplified uncertainty.

(Or maybe that’s just my take. Maybe it’s different in your world. Maybe you see it differently? I would pay big cash money to see it differently so I am open to being invited over. Please note I actually have no big cash money.)

In any case, the month of June will be a #digciz-fest of epic proportions *if* y’all come out and play, and Mia and I have the privilege of leading us all out of the gate with a few provocations and a #4wordstory conversation about what good citizenship means in participatory spaces.

Here’s my opening salvo. :) (I’m a bit of a shit about the word “citizenship”…)


Grumpy Cat, ultimate Digital Citizen, makes the perfect Rorschach Test for your own interpretations of digital citizenship! Is he saying:

a) Even online, we are all people. Kumbay-effing-yah.
b) Digital citizenship sucks because people. People suck.
c) It is irritating to even have this conversation. Stop being a digital dualist.
d) All of the above?
***

I myself am still not entirely sure. A little over a month ago, I wrote up a talk on citizenship and identity that I didn’t manage to explain very articulately…and got comments like it was 2008 up in here.

Note to self: PUBLISH HALF-BAKED LIGHTBULB MOMENTS MORE OFTEN.

Or rather: publish half-baked lightbulb moments more often, *if* you like the hit of attention/engagement/validation that comments apparently still provide, even years after blogs were supposed to be dead.

(Disclosure: I am actually that person who likes the hit of attention/engagement/validation that comments apparently still provide, just in case there were any questions. But we do not acknowledge that publicly, do we? Because decorum. Or the games we play around palatable identities in an attention economy.)

Only my discomfort with totally half-baked posts – or the rarity of lightbulb moments in my life – will save y’all from wanton comment-chasing, folks.

Which brings me back to the actual lightbulb moment that I’d had in the middle of that talk I tried to write up.

Digital platforms and digital affordances – underpinned by the capitalist enclosure of participatory digital spaces over the last decade or so, with its surveillance and metrics and constant advertising of reductive versions of our identity back to us – do NOT lend themselves to good digital citizenship, in the sense that they do not foster a space I would actually want to be a citizen of, to whatever (limited) extent the citizenship model holds when conceptualized in the border-free digital realm.

They do not lend themselves to good digital citizenship because they shape and direct human behaviour in ways that privilege capital and circulation and extremes, rather than, say, collaboration or empathy. Or even just being alone with one’s thoughts.

They increasingly shape the logic of our learning spaces to Silicon Valley’s concept of what that should be. Spoiler: that tends to be “individualism, neoliberalism, libertarianism, imperialism, the exclusion of people of color and white women.”

They foster spectacle and scale and virality and dogpiles and dragging and while there are moments of justice and glory in it all, at its logical endpoint it’s a Hunger Games.

(Wait. Mixed my dystopias.)

Of course, if you’re reading this, chances are good you don’t live in that web, that digital space. Humans have agency. Technologies and platforms are not deterministic. I read my comments section. Got it.

Most of the people who’ll ever click on this post will come to it through the variety of still-quite-participatory communities that form my network, and our collective constellation of “digital” remains very much not-entirely-subsumed by capitalism and spectacle. We resist. We share. We care.

My research was really clear about the caring, the ways in which we make ourselves vulnerable to each other, even in the strange collapsed contexts of academic Twitter.

I’d venture that in most digital spaces that build any sense of ongoing community over time, people do the same. That’s why I’m a bit of a shit about citizenship.
***

The web, and the capacity of strangers to receive my words – all of them, even the ugly ones or the half-baked ones or the things I couldn’t say out loud – once gave me back some sense of myself as being able to contribute to a world I wanted to live in.

It gave me a sense of being a citizen – in the rights and responsibilities sense, in the belonging sense – of something I was invested in more than I’ve ever, frankly, been invested in the concept of Canada as a nation-state (no matter how much Trump has done recently to make me appreciate that particular concept and its vulnerability, AHEM).

But the operations of scale and visibility and capital – especially capital – mean that our platforms keep creeping up on us, shifting, creating all kinds of insidious ways to monetize our caring and our sharing and in doing so, shape how we relate to each other…and in the long run, who we get to be in relation to these digital spaces.


And nope, #NotAllPerformativity is negative and #NotAllSoCalledSlacktivism is empty, but platform-based and -driven behaviours that shape our sense of personal identity should be things we’re watching WAAAAY more closely than we seem to know how. Not just because we may be frogs boiling slowly towards whatever Mark Zuckerberg’s end game of world domination may be…but because polarization seems to be eating us alive, as a broader society, online and off.

The fracturing of social bonds and security is not digital. The inequality and uncertainty at the root of it is not digital.

But it all leaves us…confronted. Constantly confronted.

And the digital amplifies our confrontedness.

The digital demands constant signalling. Other people’s signalling confronts us. We create spaces to bond over that confrontedness. Performative wokeness devolves into factionalism. White supremacy festers its way into the open.

This seems to be the yearbook quote of humanity confronted by virtue signalling:

And then, as a FB friend quipped in the thread under my earlier identity/citizenship post…we get caught “in the crosshairs of the split hairs.” THAT.

I think THAT should be 2017’s yearbook quote.

And because we are human, we don’t even always completely notice the way our identities are being shaped by our social environments and what they naturalize…THEY JUST BECOME OUR REALITY.
***

So…what can we do? How can we envision and work toward something better? What kinds of civic and social spaces do we want, online?

Tell us your #4wordstories of what YOU want, using the hashtag #digciz.

The conversation will unfold for 48 hours or so, through June 1st. Or whenever we’re done. Our goal is to get a sense of what people think digital citizenship can be, but also to hash out some of the constraints and realities that shape what it is, for most of us. And what works. What we could be aiming for, as a model of human engagement.

Just a little model for human engagement. You know. Shoot the moon. ;)

Networks of Care and Vulnerability

This Thursday – November 6th at 1:30pm – I’m a guest in George Veletsianos’ #scholar14 open course, talking about networks as places of care and vulnerability. It’s a Google hangout, so the talk will be an informal back and forth, open (I hope?) to multiple voices if folks want to join in.

It may even be a little bit fraught, as George may have had a different concept of vulnerability in mind when he first suggested the topic. He frames vulnerability in terms of sharing struggles, which I’ll definitely talk about on Thursday; my online origins lie deep in the heart of that territory. But the juxtaposition of care and vulnerability, as a topic, was rich enough that it helped me grapple with some of the complexities I was trying to frame from my research study, and I took up vulnerability more through a lens of risks and costs. As I am wont to do, I ran with that lens, and ended up not only with the presentation below (liveslides from Alec Couros and Katia Hildebrandt’s EC&I831 class last month) but with half a research paper under that working title for my ongoing dissertation project. So. Yay for networks.

Join us Thursday for the fisticuffs over sharing v. risk. Or something like that. ;)

More seriously, I may have ended up in a somewhat different place than George envisioned, but it’s a place I think needs to be visited and explored.

The Risks and Costs of Networked Participation
I just spent a week almost entirely offline, for the first time in…oh…about a decade. Not an intended internet sabbatical, but a side effect of extended theme park adventuring with small children and a phone that turns into a brick when I cross the US border. Y’all were spared an excess of gratuitous commentary on the great American simulacrum that is Disney, basically. You’re welcome.

Being disconnected from my network was kind of refreshing. No work, no ambient curation, no framing and self-presentation for a medium with infinite, searchable memory.

It didn’t mean I was magically present the whole time with my darling offspring: I remain a distractible human who sometimes needs to retreat to her own thoughts, online or off. Nor did it mean I missed out entirely on the surge of painful yet necessary public discussion of sexual violence, consent, and cultures of abuse and silence that bloomed in the wake of Canada’s Craziest News Week EVER. Still. Sometimes a dead phone is a handy way to cope with the overload and overwhelm of networked life, especially for those who both consume and contribute to the swirl of media in which we swim.

Because contributing and participating, out in the open – having opinions and ideas in public – has costs.
***

Participation makes us visible to others who may not know us, and makes our opinions and perspectives visible to those who may know *us,* but have never had to grapple with taking our opinions or positions seriously (oh hai, FB feeds and comments sections hijacked by various versions of #notallmen, #notallwhitewomen, and #notalltenuredscholars).

Participation enrols us in a media machine that is always and already out of our control; an attention economy that increasingly takes complex identities and reduces them to sound bites and black & white alignments.

The costs are cumulative. And they need to be talked about, by those of us who talk about networks in education and in scholarship and in research. Because in open networks, a networked identity is the price of admission. The costs are what one pays to play. But they are paid at the identity level, and they are not evenly distributed by race, gender, class, orientation, or any other identity marker. And so with participation comes differential risks. This matters.

Bud Hunt pointed out in a (paywalled but worthwhile) Educating Modern Learners article this morning that October was Connected Educators Month…and also Gamergate. Two sides of the participatory coin. Audrey Watters doubled down on that disconnect this afternoon in Hybrid Pedagogy, riffing on Dylan’s Maggie’s Farm and asking edtech to take a good, hard look at what we ask of students when we ask them to work online:

“And I think you need to think about your own work. Where you work. For whom.

And then you must consider where you demand your students work. For whom they work. Who profits. Where that content, where that data, where those dimes flow.”
– Audrey Watters, 2014

So. This post comes, like Bud and Audrey’s pieces, from a growing dismay and uneasiness with what’s happening at the intersection of technologies and capital and education; a growing belief that the risks and costs of networked identity are an ethical issue educators and researchers need to own and explore. It comes from looking through my research data for what Audrey calls “old hierarchies hard-coded onto new ones.”
***
Attending to Each Other in the Attention Economy
But it also comes from the sense that there is more; that the ties created even in the most abject, hierarchical, surveilled online spaces tend, like good cyborg entities, to exceed their origins.

It comes not just from the formal research data collected over months of ethnographic observation and conversation, but also from some deep and powerful conversations that the research process created.

I didn’t know Kate Bowles especially well when I put out the call for participants in my dissertation project a year ago today. She didn’t know she had breast cancer when she agreed to participate. Somewhere along the road of the past year, our discussions of identity and networks and academia and self and life sometimes got beautifully tangled, as ideas actually do, freed from eureka-moment idealizations of authorship. And somewhere in the middle of one of those tangles, she reminded me that my sometimes grim vision of the attention economy is not the only way to conceive of attention at all; that its origins come from stretching towards and caring for each other.

“the attention economy…isn’t just about clicks and eyeballs, but also about the ways in which we selectively tend towards each other, and tend each other’s thoughts–it’s an economy of care, not just a map to markets.”
– Kate Bowles, 2014

I don’t know what to make of all that…but there’s hope in it that I’m not willing to abandon just yet. When I think about networked scholarship right now, it’s in terms of these contradictions of care and vulnerability, all writ large in the attention economies of our worst and better angels.

Maybe on Thursday, in the #scholar14 hangout, we’ll figure it out together and I’ll know how my paper should end. ;)

what counts as academic influence online?

Sometimes things shift when you’re not looking.

I woke up last Monday morning to discover that practically every Chronicle link on my Twitter feed related to my research area. Not in any elbowing-in-on-territory kind of way, but rather in a “whoa…serious synergy here” fashion.

Sometimes, when I get up in front of fellow educators and academics and say I study scholarship and…Twitter, I end up feeling like I’m doing stand-up comedy. Really? Twitter? say people’s eyebrows. I am becoming a great student of arched eyebrows.

Yet on Monday, casual academic readers of The Chronicle – and their eyebrows – would’ve been hard-pressed not to come away with the impression that academic identities in social media are actually Something To Care About, as a profession.

(Naturally, this will have backlash. People’s eyebrows generally do not LIKE to be beaten about the head with the idea they should care about something just because suddenly it’s the Flavour of the Month. Nor should they. I feel you, eyebrows of the world.)

Still, the sense of critical mass is energizing to me. The work of research that is not legible to others always feels, rhetorically, like lifting stones uphill: constantly establishing premises rather than moving on to the deep exploration of that one particular thing.

The more the conversation about networks and identities and academia grows and pervades people’s consciousness, the less of that Sisyphean phase of the lifting I need to do.

Because this is not a Flavour of the Month, folks. This is a cultural shift, one part of the sea change in contemporary higher ed.

Dear arched eyebrows: this doesn’t mean you have to use Twitter. Or any other social networking platforms. Nor do you need to get personal online if you don’t wanna. But your concepts of academic identity and academic reputation do need to expand. Twitter and social media are now a part of scholarship, as modes of communication and of scholarly practice. So if I tell you I’m exploring the part they now play in academic influence…try not to arch so hard you hurt yourself.
***

I had the privilege of giving a keynote at the University of Edinburgh’s E-Learning conference two weeks ago now, hot on the heels of the very good time my mother and I had at #nlc14 there this year. If you missed us, we were the really excited Canadians swooning at all the Scots accents. ;)

The theme of Edinburgh’s E-learning conference this year was authenticity…a word that makes me a little wary. Authenticity matters. But authenticity can also be a weapon wielded to defend the “real” (read: the non-digital, or the traditional, or the tidily, smarmily Hallmark-branded) against whatever binary or straw man it chooses.

So I talked about networked scholarship, exploring the question of what counts as authentic academic influence now.

Basically, it coulda been subtitled “How Do Scholars Use Networks And What Does That MEAN?” or…”WTF Is A Graduate Student From Another Country Doing Giving A Keynote, And How Did This Happen?”

In the talk, I outlined some of the preliminary findings from my research these past few months, including what scholars seem to use networks for and what kinds of patterns emerged from the tweets and RTs that flew through my timeline this past winter. The slideshow above gives a taste of some of the tweets I flagged during the last few months of research (note: not all are from my participants). But what I really talked about was influence.

The Math of Influence
Influence is a complex, messy, slightly socially-discomfiting catch-all equation for how people determine the reputation and credibility and essentially the status of a scholar. There are two ways influence tends to get assessed, in scholarship: there’s the teensy little group of people who actually understand what your work really means…and then there’s everybody else, from different fields, who piece together the picture from external signals: what journals you publish in, what school you went to, your citation count, your h-index, your last grant. It’s credibility math, gatekeeping math. It’s founded in names and organizations people recognize and trust, with a running caveat of Your Mileage May Vary.

And now, in the mix, there’s Twitter. And blogs.

How can something that the general population is convinced is about what people had for lunch be a factor in changing what counts as academic influence?

Here’s how.

Beyond Gatekeeping: Networked Influence Signals
Going online and talking to people you don’t know about areas of shared scholarly interest opens up your reach and reputation for what you do. It opens up your capacity to build communities of practice around shared interests. It opens up the possibility that when people in your field – the people reviewing your panel or on your next granting committee – hear your name, it will be one of those they already recognize and trust. Maybe. There’s a LOT of Your Mileage May Vary here.

Think of a Venn diagram – here’s how scholars traditionally share their work, here’s what people had for lunch – and in the middle there are scholarly ideas ON social media. What I’m trying to do in my research is to identify the implicit literacies involved in making sense of identities and reputations and credibility in this intersection. Because so long as senior scholars and administrators and tenure committees think Twitter is what people had for lunch, there’s a gap in our understanding of influence signals, especially in fields that are changing rapidly.

I’m finding patterns and commonalities in how scholars use Twitter, and the things they express there. In the slideshow above, you’ll see that the touted “it increases your dissemination!” factor is important in shaping scholars’ practices, but for many that’s reported more as a side effect than a reason in itself. Community and connection and space to address marginalities on many fronts factor more powerfully in participants’ accounts of their networked practices, particularly for those who use Twitter for more than broadcast purposes.

At the same time, networked participation and networked connections and their non-institutional logics also bring more fraught elements overtly into play in the influence equation.

Enter Capitalism
Now, let’s not pretend that academic institutions are not capitalist institutions. They are, and increasingly so: capital equations of scarcity and commodity are very much a part of the institutionalized and gatekept versions of academic influence signals that have gained traction over recent generations. But the individual scholar in these equations is, except in superstar instances, an institutional role rather than an identity unto him or herself. In networks, individual identity operates as a brand, particularly as the scale of attention on an individual grows.

This allows junior scholars and adjuncts and grad students and otherwise institutionally-marginalized identities to build voices and audiences even without institutional status or sanction. It allows people to join the conversation about what’s happening in their field or in higher ed in general; to make contributions for which channels do not exist at the local level. Networked platforms act as hosts for public resistance to the irreconcilable contradictions of contemporary academia, as well as society more broadly. But networked platforms are still corporate platforms, and should not be seen as neutral identity playgrounds. As Tressie MacMillan Cottom and Robert Reece ask in this sharp piece on hashtags and media visibility, “how radical can your resistance be when it both funds a corporation and is subject to the decisions of that same corporation?”

Power in Networks
Being visible in networks *can* create access to visibility and voice in broadcast media, which sometimes lends perceived credibility to the way a scholar’s work is taken up…or at least amplifies his or her name recognition. The power relations of scale are complex, though: the racism and sexism and heterosexism and able-ism and Anglo-centrism of our contemporary world are in many ways replicated in the ways voices get heard, online, and the backlash for women and people of colour who dare to speak can be vicious. The constant identity positioning and lack of transparency and understanding about how visibility works can also make the world of academic Twitter into mean streets, sometimes.

The biggest factor in building influence in networks – one that should assuage some of the arched eyebrows – is that it tends to take, like all scholarship, a great deal of time and work. Twitter is not a magical path to fame, or to celebrity academic status. In fact, on its own, it’s created few superstars: the traditional, institutional halls of power and high status still do far more to thrust scholars into influential circles of attention and public regard. Noam Chomsky’s speaking fees are not especially under threat from Twitter upstarts, and Twitter and blogging alone do not often result in New York Times gigs. But they are, now, indubitably a part of that picture, in ever-expanding circles.

I see the networked version of academic influence as what Audrey Watters calls “a cyborg tactic:” the illegitimate offspring of complex totalizing equations, and yet potentially subversive to them. This potential lies, as Haraway would put it, in the fact that illegitimate offspring are often “exceedingly unfaithful to their origins.” As a development in how scholars understand each other’s signals of credibility and reputation, networked influence is neither good nor bad, and certainly not neutral. But it is, and it is important to try to understand.

And to those who would raise their eyebrows at this assertion, I say: sometimes, folks, things shift when you’re not looking.

god bless us, every one

“I have endeavoured in this Ghostly little book, to raise the Ghost of an Idea, which shall not put my readers out of humour with themselves, with each other, with the season, or with me. May it haunt their houses pleasantly, and no one wish to lay it.”

Their faithful Friend and Servant, C.D.
December, 1843.
(from the Preface to A Christmas Carol, original edition)

It’s all Dickens in our house, these days. I have a seven year old playing the part of Tiny Tim in the city’s production of A Christmas Carol: he’s rehearsing twenty or so hours a week and learning to sing notes no voice related to my own should ever decently attempt. Dave and I ferry him to and fro and discuss Victorian concepts of charity and debate the merits of his various fake English accents. We’ve also introduced his younger sister to the story via The Muppets so she doesn’t bolt in terror from our front-row-centre seats at the matinee when we take her to see him.

https://www.flickr.com/photos/ofsmallthings/8288450497

The movie-watching unfolded something like this:
five year old: Scrooge is bad!
me: Scrooge has made the mistake of thinking money is the only important thing in life.
five year old: Why does Scrooge want everybody to work on Christmas?
me: He can’t imagine anything else useful besides working, honey.
five year old: Scrooge leaves the little bunny in the cold!
me: Yes. At first he does, because he believes nobody else deserves anything of his.
five year old: Scrooge needs to learn to share!
me: Well, yes. And he does, right? He doesn’t want to live a life where nobody remembers or cares about him. So he opens his heart.

In the midst of this heartwarming tale of greed and redemption, a chill of doubt and fear struck me, and a cynical sub-narrative ran through my responses. Am I setting my children up for cruel disappointment by letting them believe in…Scrooge?
me (muttering): Power doesn’t seem to be as lonely these days as it was for the Victorians.
five year old: What?
me: Nothing.
five year old: Scrooge is sad because people say bad things about him when he’s not there.
me: Maybe the 1% should read what Twitter has to say about THEM.
five year old: What?
me: Nothing. Sorry. I was just thinking we still have some Scrooges in the world.
five year old: Why does Scrooge leave the poor bunny in the cold and throw things at him, Mommy?
me: Scrooge likes to believe that the people who don’t have what he has don’t deserve it. This is a mistake lots of people make, sweetheart. You should read the comments in The Chronicle of Higher Education sometime.
***

By mid-Victorian standards, the unredeemed Scrooge may have been a terrible, isolated cad. By the measure of the moment, his hearty embrace of a second chance at humanity seems to make him a less likely figure than Santa Claus.

Can I really raise my kids to expect that all it takes is a couple of ghosts to rid a heart of avarice and derision? Scrooge’s early outlook on the world was written as a scathing indictment of unchecked industrial-era capitalism, but he says little worse than can be found in any clot of online comments any given week…and not in the underbelly of Reddit, but in ye olde academic blogosphere.

Are there no prisons? the usual suspects snipe to the precariat who have not achieved tenure.
Are there no workhouses? they sneer at all who dared specialize in disciplines that aren’t, effectively, economic engines of their own.

When the ghosts of Christmas past arrive to point out that many struggling scholars chose their disciplines some time ago, as part of very different economic and cultural narratives? It doesn’t seem to register. Even when a Tiny Tim is held up, the first in his or her family to ever GO to college? Deaf ears. As one of these Tiny Tims who chose the field of education out of the best of intentions 20 years ago at 21, the year before the teaching market collapsed here and all the teachers stopped retiring, let me tell you: in a lot of families, just going to SCHOOL is a big, foreign, intimidating thing. When no one in your life can explain the difference between sociology and neuroscience and everyone you know just works at whatever job they can get, the concept of choosing a field based on return on investment isn’t even on the radar. Yet kids are just supposed to KNOW. Perhaps if the commenters spent their surplus hours consulting in local high schools rather than soapboxing on the internet, they could help save future generations of bright deserving youth. But let me tell you, even neuroscience ain’t a ticket to Easy Street these days, Mr. Scrooge, sir.

And when the ghosts of Christmas future intone that the tenure track is dwindling and in fact that higher ed would currently run aground in 20 minutes if all who teach within its hallowed halls were offered job security and a living wage? More selective hearing. The deserving will make it, runs the Victorian logic of parsimonious “charity” that only extends its warmth to those it recognizes as kith and kin, fellow winners in an increasingly stacked and unsustainable game.

(This is all to say nothing of the larger excesses and abuses of global post-industrial capital, of course, before anyone jumps in with that particular rhetorical parry. Western society’s most educated are hardly a sympathetic lot compared with those who mine the raw materials for our smart phones or who labour in condemned buildings to make the clothes we wear. Or those without the privilege of education in our own cities and towns. Fully agreed, full stop. That does not mean the increasingly disparate field of our own industry and agency is undeserving of regard.)

Secure or precarious, we are all tied like Scrooge to our desks these days, trying to fit more and more work and possibility into the same old 24 hours. If you have a reasonable job in academia after studying for half a lifetime? Please expect to work increasingly long hours on the treadmill for the privilege of believing you have not been left behind. If you don’t? Better bust your hump and distinguish yourself ever further, ever higher. And if the ghost of Christmas present dares show his jolly face and suggest you leave your toil for leisure?

The academy – and the rest of post-industrial capitalism – suggests you simply make leisure of your toil. We work on ourselves and our careers and our merged personal/professional identities, here in these convenient online spaces, around the clock.

We none of us have time for redemption, these days.
***
This all hits close to home because it is what I research. And I research it in the stuffed gaps between kids’ rehearsals and laundry and writing and presentations and sleep, like a proper 21st century Scrooge scholar. And then occasionally I have it reflected back to me from a perspective that turns it all on its head and I feel as if I am standing in a Victorian street in my nightshirt and bedcap, peering in at a scene and pleading, “No, spirit! No!”

A week ago last night I sent out notifications – invitations, thank yous, regretful ‘no’s to the generous people who came forward to volunteer for my upcoming dissertation research. I had a particular bounty of women from Australia, mostly white, mostly mid-career, and so one of the people I said yes to was almost a no until I plotted out my demographics differently and realized I had a gap that she might be able to speak to. I’d heard her voice for the first time ever just the week before, live from Australia in a fabulous late-night riff after my #wweopen13 live session ended. Kate.

She said yes, she’d be in my research. And then she dropped a little bombshell, gently, as you do when you are new to standing in the space where your audience’s jaw goes slack. She said “I’ve just been diagnosed with breast cancer. Just last week. The day after we talked.”

Well then.

Yesterday morning, I woke up to a post she’d published as what can really only be termed a wake-up call. She said, “You don’t have my consent to use my remaining time in this way. What do we do about the way in which overwork is the price that is now demanded for participating at all?” And then, “Hope is the alibi for inaction: what we need is the courage to put work itself at risk.”

Well then.

And I don’t know. Nor do you, likely. But A Christmas Carol played a major part in creating the public, political will to temper the excesses of industrial capital. I’d like to imagine Kate’s words could be the scathing indictment of post-industrial academia that we all need in order to reframe the pretend volunteerism that underpins so much of what keeps institutions going these days, without any real promise of reward or belonging in the mix. Perhaps we need this kind of story in order to be able to see the grotesqueries of our own culture, the spectres of our fear and our cultivated insecurity. Perhaps if we can see and own them, there is at least a chance of mitigating them.

But do not misunderstand. As Kate makes clear in the post, her diagnosis impels her and frees her to speak, but it does not make her different from any of the rest of us: “…it doesn’t make me differently mortal than anyone else. We are neither vampires nor zombies, whatever the craze for playing with these ideas: we are humans, and we are all here together for a very short time, historically speaking. And so that being the case, the question facing us all is this: what do we do about work?”

That’s the thing. Kate is not Tiny Tim: we all are. And we are our own Scrooges, too, trapped in habits that will not magically change overnight, no matter the ghosts that visit. But the spectre of our own humanity and mortality needs to be one we all begin to pay attention to, and speak for. With courage, not just hope.

MOOCs are Not the Enemy. Sorta.

So. I stood up in front of a whole room of academics and theorists and grad students with funky glasses this weekend and said the word “MOOC.” And nobody threw a single tomato, which surprised me.

My presentation for Theorizing the Web 13 at CUNY was entitled “MOOCs are Not the Enemy: Networked, Non-Imperialist MOOC models.” Or in simplest terms, “cMOOC is for cyborg.” Ahem.

The Cliff Notes version:
My base premises are these: privatization is bad and colonialism is bad and globalization is as shady as it’s always been and there are lots of totalizing systems at work in higher ed these days, old and new. But talking about these things through the lens of MOOCs increasingly seems to devolve into binary arguments against one totality while half-defending another, until it feels like the proverb about the seven old blind men and the elephant. A MOOC is a snake! cries the one holding the tail. No! It’s a sail! shouts the one with the ear in hand.

More Than is Dreamt of In Your Philosophy, Horatio
Both the elephant and the MOOC defy simple metaphors, because they’re huge. MOOCs make visible the intersection of a snarl of complicated axes of change and power relations in higher ed, so reifying them into a single axis – even if it’s the dominant one – leaves too much of the picture out. A MOOC is a course that is massive and open and online in some way and beyond that, for the moment, I’m agnostic.

Not because I’m not aligned: I am aligned. But because I think the conversation is too important to foreclose. There are a host of valid criticisms of MOOCs of all kinds, even the ones I really enjoy, and I want to be having those conversations and talking about the forces driving different MOOC models and driving change in higher ed. A lot of these forces scare the shit out of me, for the record. But I think – as I’ve heard other people say (I’d thought it was Cathy Davidson but I can’t seem to find a link) – that MOOCs are a symptom of these forces rather than the problem in and of themselves.

So dismissing MOOCs outright, or insisting on talking about all MOOCs as if they were one hegemonic thing rather than a still new and shifting collection of phenomena, shuts down the possibility of doing something more with them.

It gives the conversation over. I’m not ready to do that. I don’t want to give over – yet, at least – to the idea that anything about MOOCs is inevitable.

Beyond the Borg Complex
To be sure, we can’t be in higher ed today without being to some extent subject to the changes being wrought by privatization and globalization and the undermining of the narrative of public ed and the public good. These logics constrain budgets, shape policy, affect how what we do is taken up and the roles available to us.

The most dominant MOOC models embody a lot of these forces and logics. So they inspire vitriolic response: we don’t want to be the kind of subjects they seem to impose on us.

Or some of us don’t. In the ongoing Shirky/Bady back & forth about which end of the elephant is more equal than others, Bady pegs Shirky’s “it’s happening anyway, might as well adapt” response as a form of what Sacasas calls the Borg Complex, a determinist “resistance is futile” fatalism combined with a neoliberal identity approach.

But that conversation is still a binary. And leaves Bady to some extent defending the traditions of that other totalizing system, the conventional patriarchal and elitist mythology of “schooling” that many open online educational efforts exist to challenge.

I end up nodding hopelessly at the beautiful prose of the both of them and thinking about narrative escalation in pre-World War I Europe. With all this grandiose buildup, the Triple MOOC Entente and the Triple MOOC Alliance carve out increasingly opposed territories until I wonder if Archduke Ferdinand’s been shot yet and the bloody inevitability can just start, already.

Or we could explore MOOCs from a cyborg perspective.

A cyborg is not Borg
The Borg is an all-swallowing collective that cannot be resisted, a totalizing force.

Haraway’s cyborg, on the other hand, is what might be termed a networked individual, illegitimate offspring of what Haraway calls the “informatics of domination,” but still subversive to the very forces that created her. S/he is an ironic hybrid of human and technology who breaks down binaries that otherwise seem naturalized and totalizing. The cyborg recognizes in technologies the possibility of “great human satisfaction, as well as a matrix of complex dominations. Cyborg imagery can suggest a way out of the maze of dualisms in which we have explained our bodies and our tools to ourselves.” (1991) The cyborg is complicit, a part of this digital world. But s/he is never entirely subject to its terms: s/he is not without agency.

The cMOOC as cyborg
So on the plane down to Theorizing the Web, as I finalized my slides, I decided that the first c in cMOOC stands for cyborg.

(I mean, I know it *actually* stands for connectivist. That’s as it should be. MOOCs were founded on the connectivist principles that knowledge is distributed and generative, and I think for MOOCs to actually capitalize in any sense on the affordances of digital technologies and not merely transfer traditional approaches to learning into the online space, those two concepts are important lodestars. And the original MOOC was built not only on George Siemens’ and Stephen Downes’ work developing connectivism but was actually a course ON connectivism and connected knowledge: the cMOOC model is connectivism incarnate.)

Because I’ve had the (sometimes admittedly discombobulating) pleasure of working with and in and around this grassroots model of MOOC for a few years now, I have a vantage point that many of MOOCs’ detractors don’t: I have lived experience of a model of MOOC that isn’t corporate, or colonial, or – most importantly – totalizing. And I think cMOOCs and other networked online learning opportunities and efforts that attempt to destabilize some of the institutional or corporate or globalizing tendencies that dominate much of the MOOC conversation (and many MOOCs themselves) may offer a cyborg approach to massive, open, online learning: they may offer a model of subversion.

cMOOCs, even as cyborg, are neither a perfect model nor a panacea for all the challenges higher education faces. But they emphasize participatory, networked, distributed approaches to learning that challenge and subvert many of our inherited cultural concepts of schooling. They encourage learners to generate knowledge, in addition to simply mastering it. They are a way to re-vision the conversation in terms that neither deny the possibilities of technology and networks nor give over entirely to the logics and informatics of domination.

They are MOOCs that undermine some of what MOOCs seem to be coming to mean, and in that, I think there is both power and potential.

***
current/ongoing/historical cMOOCs & their open/online/hybrid kin:
(including even a Coursera course that tries very hard to subvert its own conditions of production)

#etmooc (Educational Technologies MOOC – ongoing and amazing, just entering topic 4: check it & join in)
#moocmooc archives (two separate week-long MOOCs on MOOCs)
#ds106 (not a MOOC, but an ongoing, open, public course in digital storytelling via University of Mary Washington)
@dukesurprise (a for-credit Duke course with an open, public component)
#inq13 (a POOC or Participatory Open Online Course through CUNY on inequalities, with an East Harlem focus)
#edcmooc (a Coursera course in Elearning & Digital Cultures offered by University of Edinburgh that runs more like a cMOOC)
The MOOC Guide – Stephen Downes’ master resource of most cMOOC-ish offerings from the beginning
#change11 archive (the mother of all cMOOCs: 35 facilitators each took a week to explore change in higher ed)

There are lots more, I’m sure – happy to add if people want to send examples.