if Foucault ran a MOOC

In the mad hype these days about MOOCs – which are Massive Open Online Courses, for those of you either not in higher ed or currently aboard the Mars rover – I find myself that dreary voice in the back of the class repeatedly piping up Hey dudes. MOOCs are not just whatever you decided they were when you encountered the word yesterday. MOOCs have a history.

I know. It probably comes off horribly. We in North America tend to be rather ahistorical, these days.

But when you are tied to a thing and its history and then that thing blows up in scale and the silly word that was coined in your living room is in the NYT and the thing you researched two years ago is everybody else’s New Big Thing and it means utterly divergent things to everybody and yet you’re all in the same messy conversation and nobody’s really sure what’s happening, well, you can only speak from where you are.

Which, in my case with MOOCs, is history.

(Or at least historical. The jury is still out on the rest.)

It’s like being one of those annoying groupies who was around before a band got huge and sold out to The Man and I find myself chirping, I knew MOOC back in the day, you know. When he was Authentic.

Yeh. Shaddup already. I got it.

That is not the history of MOOCs I want to talk about. The one where they started in Canada and are based in connectivist principles that model the operations of the internet, and blah blah blah. I have a different story for you.

It’s about what MOOCs – all of ’em, but maybe especially the xMOOCs and Coursera and all the big ones tied to the elite institutions; the ones currently leaping on board and those about to leap – have to learn from Foucault.

I know. Foucault’s star in academia has been eclipsed of late by the resurgence of focus on all things quantifiable and measurable and ostensibly efficient, but I want to wander, at least briefly, down the road of assuming there’s value in stepping beyond the New & Improved!™ sales mentality that seems to accompany our collective contemporary approach to all things educational.

Because here’s the thing. Most of what we’re on the precipice of exploring in higher ed with MOOCs is not actually new.

Foucault, for instance, had a MOOC in 1970. Or at least a MOC.

For the last fifteen or so years of his career, Foucault taught a massive, open course. Every year. That was one of the terms of his chair at the College de France. He was everything MOOC except online.

The politics of France in the late ’60s when Foucault’s chair was decided upon were such that it was perceived as elitist and unthinkable to keep Foucault, one of the country’s great philosophical treasures, from the people. Thus his seminars and courses were free. They ran from January to March each year. The public had the right to attend: two lecture halls were generally filled for every address he gave.

As technology became available, the lectures were recorded. These have since been compiled and sold as books.

Of course, travel and books are NOT actually free. And presumably there were many in France at the time who – due to constraints of money or time or distance – were nonetheless unable to access their national philosophical treasure, no matter their right.

Still, enough people showed up that it was, apparently, something of a pain in the ass for Foucault. Not because he didn’t want to talk to lots of people, but because talking at lots of people is not the same as talking to them.

He opened his 1983 lecture series – his second-last – on January 5th of that year by stating that “it is often rather difficult giving a series of lectures like this without the possibility of comebacks or discussion, and not knowing whether what one is saying finds an echo in those who are working on a thesis or a master’s degree, whether it provides them with possibilities for reflection and work” (Government of Self and Others, page 1).

He goes on, then, to acknowledge that “in this institution, where the rules are very liberal, we cannot give closed seminars, reserved for just a few auditors…All the same, what I would like, not so much for you but selfishly for myself, is to be able to meet. Off-Broadway, outside of the lectures, with those of you who could possibly discuss the subjects I will be talking about this year, or that I have talked about elsewhere and previously” (Government of Self and Others, page 1).

The thing about histories is that they help us understand what is NOT new about what *seems* new so we can understand:
a) what actually IS new.
b) what is valuable in the difference.

From Foucault’s MOC experience, it becomes clear that the idea of massive and open courses isn’t particularly new at all, though it is rather foreign within the North American academic tradition. The key differentiating aspect of MOOCs, then, is that they’re online.

Being online means they can spread and scale and disseminate knowledge incredibly widely, sure. But 2000 or 200,000 people only really begin to take up the online potential of MOOCs when they connect and network. When they go, as Foucault put it, Off-Broadway to discuss and participate: when the possibility of comebacks between professor and student becomes a reality.

This is the piece that I hope the various institutions currently grappling with the question and challenge of MOOCs take to heart: just using the internet to open another giant free lecture hall? Does not a new learning opportunity make. If MIT and Stanford and the lot are doing it out of their deeply socialist commitment to all citizens having the same access to their learned luminaries, well then, the College de France model may suffice.

But it didn’t suffice for Foucault, in terms of his own growth as a thinker and a scholar.

Now, his whole chair existed for the purpose of those lectures. His funding ensured that he had the time to follow those inclinations to set up an Off-Broadway discussion group with interested learners in the lecture herd.

Do the MOOCs that extend the brand of elite institutions enable and support their faculty in engaging with learners, in making MOOCs more than simply MOCs or massive one-sided conversations, however edifying?

Because that’s what the history of massive course delivery suggests is valuable. And that’s something that the historical MOOCs – the smaller, non-institutional Canadian versions that pioneered the term – were built on: the capacity of the Internet to connect people, in networks.

I suspect if Foucault ran a MOOC today – whether xMOOC or connectivist MOOC or any other model yet to emerge – that’s what he’d be advocating.

Listen up, higher ed.

 

education, learning, and technology – #change11

One of the most interesting things I’m doing this year – learning-wise, research-wise, and community-wise – is the Change MOOC.

(For those of you not already signed on for this adventure, dimestore recap: a MOOC is a Massive Open Online Course. It’s free. Anyone can register and participate. There are set topics, assignments, and timelines, but you do what you want, via blog or FB or the central discussion threads: in terms of both your contributions and the platforms you use to share them, it’s entirely your choice. There are no gold stars – as yet – or credentials for completion, and no invalid forms of participation so long as you’re respectful of others. It’s a chance, basically, to be part of some coordinated conversations about learning, or whatever the topic, and to make some connections amidst the morass of people IN that conversation. A MOOC usually has participants from around the globe, at least within the English-speaking world. There are close to 2000 people registered for this one, I think.)

MOOCs are catching on these days: Stanford is even offering one. What makes the Change MOOC particularly intriguing is that while Stephen Downes, George Siemens, & Dave Cormier (yep, that Dave) – the godfathers of MOOC – will manage this course, they’re not doing the lion’s share of the facilitation this time.

The course is 36 weeks long. Each week has its own focus, under the overarching umbrella theme of “Change: Education, Learning, and Technology,” or how being connected changes learning. Which, as I see it, is a key site of contemporary cultural shift whether you come at it from the perspective of an educator, a geek, or simply a connected person interested in understanding social media practices more explicitly. Each week is facilitated – readings suggested, an online discussion session hosted – by a researcher or innovator or leader in the particular area being explored: this week, it so happens, is the theme of Digital Scholarship, led by Open University academic and author Martin Weller. Participants are encouraged to “write themselves into the course” by responding to topics, themes and assignments in whatever way they wish. In MOOCs, participants’ input often drives as much of the trajectory of discussion and interaction as the facilitator does, if not more. It’s networked, distributed learning.

I’ll be facilitating a week on social media identity come spring. I get to take my fledgling research into a classroom-ish setting and explore it with and through the participation of others, many of whom likely will have social media identities to bring to the table. (I’m the second-last week, mind you, so the 2000+ may either have dwindled to 12 by then or blown up to even more gargantuan proportions. We’ll see.)

But even better than getting to teach, I get to participate in the whole shebang. Student, faculty, and researcher all in one. Most of the 36 facilitators are also participants and researchers. Identity-wise, this levelling of hierarchical role separations obviously interests me. But so does the rest of the content.

This week’s Digital Scholarship discussion is particularly interesting. What does the capacity to share ideas outside traditional academic channels mean for scholarship in the 21st century? What will the impact of it be? It’s the impact piece – and the implications for traditional practices – that intrigues me. How do connectivity and the capacity for digital sociality suggest transformations in academia?

One of the readings Martin suggested for the week was the JISC (UK) report (2009) on the Lives and Technologies of Early Career Researchers: as an early career researcher, just starting on my second year of Ph.D. studies, I’ve been conducting my own informal experiment into the subject over the past year.

Am I getting more input and feedback into my research and learning via traditional academic channels, or online?

It’s an unfair question, in a sense, because of scale. I’m in a tiny program, the first Ph.D. in Education in this province. We are a cohort of three, with three additional students starting up this month. While there are a few broad overlaps in subject area, my peers and I share very little in the way of common focus, experience, practice, or expectation. So the level of face-to-face peer input into my research thus far – lovely and supportive though my colleagues are – has been seriously minimal. Tiny. Whereas from the time I started this blog back in January, the combination of my large-ish online community and my research interest in online identities and practices has made a wealth of sharing and input and feedback available to me, in spite of the fact that the majority of that online community are not in any way academics. They are, however, engaged in the culture of blogging and social media, which encourages reciprocality. Academia is generally still disposed and structured to be wary of reciprocality: it comes too close to plagiarism and treads on cherished Enlightenment notions of individual intellectual enterprise.

However, if I’d had an online community of only three, I might not have had the same experience. But it is very, very hard to have an online community of three. The scale of connectivity in digital spaces and the potential for productive sharing, collaboration, and congruence therein is one of the biggest arguments, in my mind, for digital scholarship. Or at least, the digital engagement of scholars. Which usually ends up being digital scholarship, because people engage on topics that interest them.

As Martin Weller points out, though, there’s a conflict here. Research, in most traditional academic conceptions, relies on concepts of control, even where replicability is not required. New technologies are, from an institutional practice point of view, about letting go of control: giving it up to the crowd. And if academia lets go of its controls, of course, how does it validate knowledge? How does it verify and justify its own structures and practices? Yes, connectivity distributes research ideas far more quickly and broadly than traditional journals. At the same time, yes, crowdsourcing is (perhaps) a more vulnerable system of verification than peer review, Nature magazine’s 2006 experiment notwithstanding.

(Sidenote: have been reading Deleuze on Foucault lately. Foucault spoke about the institutional structures of the 19th century as the structures of a disciplinary society, juxtaposed against what he, Burroughs, & Deleuze called societies of control, in which behaviours are continuously modulated, shaped by business principles along digital (or spectrum-based, non-analogical) models. Control devolves from confinement by institution to a self-colonizing practice taken up by the crowd, by the individuals within. I think this has – ahem – some resonance for those of us interested in higher education and MOOCs; I’ll explore it soon, in another post.)

The biggest best thing about the MOOC is that it’s a semi-structured opportunity to teach yourself to BE a digital scholar, whether in or out of the academy. To select what’s relevant from the stream. To curate. To share. To work iteratively, publishing ideas that – like this post – aren’t all you could say on the subject, but are at least a start. And letting some stuff go so as not to be entirely colonized, perhaps.

All are welcome, and there are still 34 mind-boggling-packed weeks ahead. Not too late. Think on it. Change MOOC.

I’ll let Dave explain how simple it is, courtesy of our SSHRC research project last year.

a genealogy of digital identities

Grope. Stumble. Circle around.

I’m fumbling my way towards the methods & methodology choices that will guide my digital identity research. This week, for the first time, my blurry paths collided hard with current events in the world and the social media sphere.

Tom MacMaster, the hoax blogger behind A Gay Girl in Damascus, has personally altered the direction of my dissertation’s methods section.

(Okay, well, him and Twitter. And the mainstream media attention his blog garnered even before he claimed Amina had been kidnapped. And the Orientalism and colonialism and exoticism that still inform how we in the West attend to narratives from the Other, seeing as I doubt somehow that it was a total coincidence that the single identity most Westerners could name from the whole Syrian uprising this spring turns out to be that of…a Westerner.)

I struggle with formalist categories like method. I recognize that they are, in a sense, intended to make things clearer, to parse the broad territory of social science and research and the multitudes therein. For someone like me, more inclined to gradations and overlaps than clear divisions, they confuse. I hover on the borders and boundaries, a millipede with feet in so many camps that headings like “Research Objectives” and “Data” make me feel hopelessly messy, mired in no-man’s-land.

This isn’t a bad thing, only a disorienting one. My work doesn’t fit tidily within the bounds of education alone, or of cyborg anthropology or any other discipline or corner. The straddling that I need to do between discourses and approaches and worldviews helps me unpack methods and methodologies and epistemologies, forces me to continually apply theory to theory in a roundabout kaleidoscope. Patti Lather’s work, which explores validity structures transgressive to traditional scientific methodologies and includes comforting titles like Getting Lost (2007), helps me feel better about the kaleidoscope. My goal, after all, is situated knowledge, rhizomatic knowledge with multiple openings. No one tidy method will ever take me on that kind of exploration.

Every journey has first steps. The two methods I’ve embarked on thus far are themselves straddlers, each bridging the blurry boundaries between methodology and method. One is the material-semiotic method that marks Actor-Network Theory and the work of Latour and Haraway and Karen Barad. The other is Foucault’s genealogy.

It is my understanding of the genealogy of digital identity that I’m going to have to revisit after this week.

Just a few days back, somebody asked the question that inevitably comes up whenever I mention genealogy and social media in the same breath: “How could there be social media subjectivity before social media?”

Sure, the platforms I’m working with date only from 2005 or so. But the shifts in the forms of identity performance privileged during that timespan have still been pretty heady. And digital identity scholarship was huge in the 90s. Haraway’s cyborg metaphor, which informs my own concept of social media subjectivity, is from 1985. The narrative forms and subjectivities that the blogosphere made into mass communications could be argued to have their origins in Montaigne. This rhizome has far older roots than appear on the surface.

Genealogy as a philosophical method isn’t much different from genealogy as your great-aunt Louise’s favourite hobby: it’s an historically-focused endeavour that operates on the assumption that our present understandings – of self, of our place in the world, of anything – have precedents and ancestors.

In genealogy, delving into the questions of what or who these ancestors might have been and how they operated is an almost-never-ending, always-partial process of unpacking and tracing and exploring, aimed at re-presenting the present in a broader, more complex, and perhaps counter-intuitive light. Knowing you are a descendant of Marie Antoinette, even whilst you traipse the aisles of Walmart, may imbue you with a sense of grandeur, tragedy, entitlement, or irony, depending on your perspective.

Knowing the ancestors of our notions of who we are when we’re online, when we write ourselves into being, when we engage with each other through identities with visible metrics? I don’t know whether that will imbue us with any grandeur – I’m aiming more for irony – but I hope it will help situate the implications of social media subjectivities within stories and discourses more familiar to higher education, so I can then consider the overlaps and challenges facing academia in the near future.

But. But. One of the historical notions I believed I could refer to and then politely consign to the out-of-date heap came roaring back into play this week, with the furor over the Amina hoax.

The purportedly half-American, half-Syrian lesbian passing herself off in interviews with The Guardian (the big one, not the local PEI paper) as “the ultimate outsider” is, of course, actually MacMaster, a white male Master’s student living in Edinburgh.

What that says about white male fantasies of outsider status, the one thing privilege cannot offer, fascinates and entertains me. And affects my perspective on digital identity, because it revives a trope I thought I’d watched die.

In the 1990s, there was a lot of scholarly interest and attention paid to the idea of digital identity. Sherry Turkle and Neil Postman and a whole host of people did fascinating, exploratory work on the emerging digital culture and ideologies of technology and identity and the body in virtual worlds. One of the recurring themes in much of that work emphasized virtual identity and the possibilities of pseudonymous identity performance enabled by computers.

My favourite of these is the story of “Julie” from Allucquere Rosanne Stone’s The War of Desire & Technology at the Close of the Mechanical Age (1995). Julie was the extraordinarily successful and popular female persona of a male psychiatrist in an early CompuServe chatroom. Like Amina’s, Julie’s was a marginalized female persona performed by a mainstream male: Julie claimed to be disfigured and disabled. In a narrative arc rather similar to that of Amina, Julie was ultimately outed by her own excess: while she claimed that her disability left her unable to interact offline with her chatroom community, she wove an increasingly complex narrative of offline antics. The stories created suspicion, and her embodied identity as Sanford Lewin was revealed. The gap between Lewin’s assigned identity and his virtual performance as Julie represented one of the major themes of digital identity scholarship in the ’90s: the possibility of being someone else online.

I thought this particular piece of digital identity ancestry had been rendered largely historical. When I began blogging in 2006, many of the bloggers I read – especially those who wrote about parenting and children – were still pseudonymous. Gradually, that shifted: the digital sociality that emerged out of that blogosphere community is an augmented reality, wherein people regularly meet in person and connect with each other across platforms, including Facebook, which tends to privilege and push towards disclosure of so-called “real” identity. Beyond that, the incursion of capital and sponsorship and the discourse of monetization all emphasized coming out as “oneself,” because a blogger named WineyMommy (names have been changed to protect the innocent) is arguably less likely to get picked up as a writer for the Huffington Post, say. Even if that only pays in reputation and opportunity.

My genealogy, though, will obviously need to consider how speaking the dominant discourse of power impacts reputation and opportunity, even for those purporting to be marginalized voices. It’ll need to reconsider whether even in the neoliberal “Me, Inc” augmented reality of social media, there’s room for performances of subjectivity that don’t match a person’s assigned gender or cultural identity.

Genealogy, as I understand it, is about who can speak, and for whom, and to whom. Grope. Stumble. Circle back on myself and revisit. Thanks, Amina, for complexifying things. I’d hate for my methods section to get, uh, dull.
***

Have you ever had a pseudonymous identity online? If no, why not? If yes, to what extent did this persona line up with your own assigned identity?

Are you the same you across platforms (blogging, Twitter, FB, etc)? What factors affect your decisions about how to present yourself in social media spaces?