I’ve been using R for years and absolutely love it, warts and all, but it’s been hard to ignore some of the publicity the Julia language has been receiving. To put it succinctly, Julia promises both speed and intuitive use to meet contemporary data challenges. As soon as I started dabbling in it about six months ago I was sold. It’s a very nice language. After I had understood most of the language’s syntax, I found myself thinking “But can it do networks?”
Sadly, we haven’t posted in a while. My own excuse is that I’ve been working a lot on a dissertation chapter. I’m presenting this work at the Young Scholars in Social Movements conference at Notre Dame at the beginning of May and have just finished a rather rough draft of that chapter. The abstract:
Scholars and policy makers recognize the need for better and timelier data about contentious collective action, both the peaceful protests that are understood as part of democracy and the violent events that are threats to it. News media provide the only consistent source of information available outside government intelligence agencies and are thus the focus of all scholarly efforts to improve collective action data. Human coding of news sources is time-consuming and thus can never be timely and is necessarily limited to a small number of sources, a small time interval, or a limited set of protest “issues” as captured by particular keywords. There have been a number of attempts to address this need through machine coding of electronic versions of news media, but approaches so far remain less than optimal. The goal of this paper is to outline the steps needed to build, test, and validate an open-source system for coding protest events from any electronically available news source using advances from natural language processing and machine learning. Such a system should have the effect of increasing the speed and reducing the labor costs associated with identifying and coding collective actions in news sources, thus increasing the timeliness of protest data and reducing biases due to excessive reliance on too few news sources. The system will also be open, available for replication, and extendable by future social movement researchers and by social and computational scientists.
You can find the chapter at SSRN.
This is very much a work still in progress. There are some tasks that I know immediately need to be done: improving evaluation for the closed-ended coding task, incorporating the open-ended coding, and clarifying the methods. For those of you who do event data work, I would love your feedback. Also, if you can think of a witty, Googleable name for the system, I’d love to hear that too.
For my dissertation, I’ve been working on a way to generate new protest event data using principles from natural language processing and machine learning. In the process, I’ve been assessing other datasets to see how well they have captured protest events.
I’ve mused before on assessing GDELT (currently under reorganized management) for protest events. One step in doing this has been to compare it to the Dynamics of Collective Action dataset (hereafter DoCA), a remarkable undertaking supervised by some leading names in social movements (Soule, McCarthy, Olzak, and McAdam), wherein their team hand-coded 35 years of the New York Times for protest events. Each event record includes not only when and where the event took place (what GDELT includes) but also over 90 other variables, including a qualitative description of the event, the claims of the protesters, their target, the form of protest, and the groups initiating it.
Pam Oliver, Chaeyoon Lim, and I compared the two datasets by looking at a simple monthly time series of event counts and also did a qualitative comparison of a specific month.
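The monthly time-series comparison boils down to aggregating each dataset’s events into aligned monthly counts. A minimal sketch in plain Python, using invented stand-in records rather than actual GDELT or DoCA rows:

```python
from collections import Counter
from datetime import date

# Hypothetical (dataset, event_date) records standing in for GDELT and
# DoCA protest-event rows; the dates below are invented for illustration.
events = [
    ("gdelt", date(1991, 1, 5)), ("gdelt", date(1991, 1, 20)),
    ("gdelt", date(1991, 2, 11)), ("doca", date(1991, 1, 9)),
    ("doca", date(1991, 2, 2)), ("doca", date(1991, 2, 25)),
]

# Aggregate each dataset into monthly counts keyed by (year, month).
monthly = {"gdelt": Counter(), "doca": Counter()}
for dataset, d in events:
    monthly[dataset][(d.year, d.month)] += 1

# Align the two series on the union of observed months so the counts
# can be compared (or correlated, or plotted) side by side.
months = sorted(monthly["gdelt"].keys() | monthly["doca"].keys())
series = {ds: [monthly[ds][m] for m in months] for ds in monthly}
```

With the series aligned on a common index, computing a correlation between the two counts, or plotting them on one axis, is straightforward.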
Michael Corey asked me to post this CfP for a conference “Demography in the Digital Age,” occurring at Facebook the day before ASA (August 15). Note that this is the same day as the ASA Datathon, but if you’re a demographer this looks very cool.
On August 15th, 2014, Facebook is sponsoring a conference on data collection in the digital age. Planned for the day before the American Sociological Association meetings in San Francisco, the conference aims to bring together faculty, grad students, and industry professionals to share techniques related to data collection with the advent of social media and increased interconnectivity across the world.
I’m excited to say that Sociological Science, the new general audience open-access sociology journal, has published its first batch of articles. These include a great set of pieces, including one from my collaborator Chaeyoon Lim on network effects and emotional well-being. But the article “The Structure of Online Activism” by Lewis, Gray, and Meierhenrich caught my eye, for obvious reasons.
I’ve got some thoughts on this article, and following the philosophy of Sociological Science of encouraging “ex post corrections/comments over ex ante R&R demands,” here’s my response, which I’m also posting as a formal response on the Sociological Science site.
With season 6 of RuPaul’s Drag Race beginning exactly two weeks from today, it is officially the Drag Race preseason. I had lofty ideas for this season, like doing some elaborate forecasting from Twitter data à la the line of research that’s grown around elections forecasting. But little things (my dissertation) have limited the kind of commitment I can make to that endeavor.
Instead, I’m taking some inspiration from Jay Ulfelder and using a wiki survey to generate a forecast for the winner of season 6. I’m not really sure if a preseason forecast is actually a very good tool here — I’d venture the average Drag Race viewer isn’t well-versed in the careers of most of the queens who are appearing on this season. But there are definitely viewers who have some strong opinions formed already (like my RPDR viewing buddy Ryan) so I hope to get those folks voting within the next two weeks.
I present to you, thus, the RuPaul’s Drag Race wiki survey. Please share far and wide!
Laura K. Nelson wrote a nice review of my recent Mobilization article last week for the Mobilizing Ideas blog. She sums up some of the work that I had done in preparing the article and training the machine learning classifier for coding mobilization in the April 6th Movement’s Facebook messages.
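For readers curious what training such a classifier involves at its simplest, here is a toy multinomial Naive Bayes text classifier in plain Python. The messages and labels below are invented for illustration and bear no relation to the actual training data or to the model used in the article:

```python
from collections import Counter
import math

# Toy training set: messages hand-labeled for whether they call for
# mobilization (texts and labels invented for illustration only).
train = [
    ("join the protest in tahrir tomorrow", "mobilizing"),
    ("everyone march with us on friday", "mobilizing"),
    ("the news today was disappointing", "other"),
    ("reading about the economy this week", "other"),
]

# "Training" a multinomial Naive Bayes model is just counting: per-class
# word counts plus per-class document counts for the priors.
word_counts = {"mobilizing": Counter(), "other": Counter()}
class_counts = Counter()
for text, label in train:
    class_counts[label] += 1
    word_counts[label].update(text.split())

vocab = {w for counts in word_counts.values() for w in counts}

def classify(text):
    """Score each class as log prior + smoothed log likelihoods."""
    total_docs = sum(class_counts.values())
    scores = {}
    for label in word_counts:
        score = math.log(class_counts[label] / total_docs)
        total = sum(word_counts[label].values())
        for w in text.split():
            # Add-one (Laplace) smoothing so unseen words don't zero out.
            score += math.log((word_counts[label][w] + 1) / (total + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

label = classify("march with us to the protest")
```

A real application would of course use far more labeled data, proper tokenization, and held-out evaluation, but the counting-and-smoothing core is the same.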
Brayden King at Northwestern asked me to pass this on.
The Kellogg School of Management at Northwestern University seeks a post-doctoral researcher interested in at least one of the following areas of scholarship: social movements, collective behavior, networks, and organizational theory. We particularly encourage scholars to apply who have advanced quantitative training, programming skills, and familiarity with “big data” methods. The ideal candidate will have a PhD in sociology, communications, political science, or information sciences.
The post-doctoral position will allow the scholar to advance his or her own research agenda while also working on collaborative projects related to social media and activism. The post-doctoral position will be managed by Brayden King and will be affiliated with the Management and Organizations department and NICO (Northwestern Institute on Complex Systems). The term of this position is negotiable.
To apply, please e-mail curriculum vitae along with a brief statement of how your research interests are related to this position to Juliana Steers (firstname.lastname@example.org) with “MORS Post-Doctoral Position” as the subject. Arrange to have two letters of recommendation e-mailed to the same address. Salary and research budget are competitive and include full medical insurance. Applications are due March 2, 2014.
Northwestern University is an Equal Opportunity, Affirmative Action Employer of all protected classes including veterans and individuals with disabilities.
This is a guest post by Charles Seguin. He is a PhD student in sociology at the University of North Carolina at Chapel Hill.
Sociologists and historians have shown us that national public discourse on lynching underwent a fairly profound transformation during the period from roughly 1880 to 1925. My dissertation studies the sources and consequences of this transformation, but in this blog post I’ll just try to sketch some of its contours. In my dissertation I use machine learning methods to analyze this discursive transformation; however, after reading several hundred lynching articles to train the machine learning algorithms, I think I have a pretty good understanding of the key words and phrases that mark the changes in lynching discourse. In this blog post, then, I’ll be using basic keyword, bigram (word pair), and trigram searches to illustrate some of those changes.
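To give a flavor of what these searches involve, keyword, bigram, and trigram counts can be pulled from tokenized text with nothing beyond the standard library. The sentence below is an invented stand-in for article text, not a quotation from the corpus:

```python
from collections import Counter

# Invented stand-in for the text of a historical news article; the real
# corpus is New York Times lynching coverage, roughly 1880-1925.
text = "the mob lynched the prisoner before the sheriff arrived"
tokens = text.lower().split()

# Keyword search: how often does a term of interest appear?
keyword_hits = tokens.count("mob")

# Bigram and trigram counts via a sliding window over the tokens.
bigrams = Counter(zip(tokens, tokens[1:]))
trigrams = Counter(zip(tokens, tokens[1:], tokens[2:]))
```

Run over a large corpus and grouped by year, counts like these are enough to trace how particular words and word pairs rise and fall in the discourse over time.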
This is a guest post by Laura K. Nelson. She is a doctoral candidate in sociology at the University of California, Berkeley. She is interested in applying automated text analysis techniques to understand how cultures and logics unify political and social movements. Her current research, funded in part by the NSF, examines these cultures and logics via the long-term development of women’s movements in the United States. She can be reached at email@example.com.
Computer-assisted, or automated, text analysis is finally making its way into sociology, as evidenced by the new issue of Poetics devoted to one technique, topic modeling (Poetics 41, 2013). While these methods have been widely used and explored in disciplines like computational linguistics, digital humanities, and, importantly, political science, only recently have sociologists paid attention to them. In my short time using automated text analysis methods I have noticed two recurring issues, both of which I will address in this post. First, when I’ve presented these methods at conferences, and when I’ve seen others present them, the same two questions are inevitably asked, and they have indeed come up again in response to this issue (more on this below). If you use these methods, you should have a response. Second, those who are attempting to use these methods are often not aware of the full range of techniques under the automated text analysis umbrella and choose a method based on convenience, not knowledge.
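Whatever technique one picks from under that umbrella, topic modeling included, most begin from the same object: a document-term matrix of word counts per document. A minimal standard-library sketch, with invented documents:

```python
from collections import Counter

# Three toy documents (invented); a real application would use a corpus
# such as movement publications or news articles.
docs = [
    "women organize for equal rights",
    "equal pay for equal work",
    "the march drew thousands",
]

# Build the document-term matrix: one row per document, one column per
# vocabulary word, cells holding raw word counts.
tokenized = [d.split() for d in docs]
vocab = sorted({w for doc in tokenized for w in doc})
dtm = [[Counter(doc)[w] for w in vocab] for doc in tokenized]
```

From this shared starting point the techniques diverge: topic models factor the matrix into topics, dictionary methods sum counts over word lists, and supervised classifiers use the rows as feature vectors.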