Last week’s post on the metal collaboration network brought attention largely to the “giant component” – the largest subgraph of a network in which every actor has at least one path to every other actor. In large networks, even sparse ones, a giant component typically emerges and includes the majority of actors. While focusing on the giant component is conventional practice when analyzing small-world networks, worthwhile information can perhaps also be inferred from the actors outside of it.


A couple of months ago a friend directed me to a piece in the New Yorker that included a nice interactive map depicting the landscape of craft brewing in the United States, based on data provided by the Brewers Association. Using this data, what can we say about the geography of craft brewing in the United States?

To get started, let’s look at the distribution of craft breweries at the state level:

[Figure: map of the log number of craft breweries per 500,000 people, by state]

There are a couple of points to note here. First, I’ve omitted Alaska and Hawaii. Disconnected observations can become a bit of a headache when working with some of the methods described below. There are ways around this, but for the purposes of this post I’d prefer to set those issues aside. Second, instead of dealing with raw counts, I’ve taken the log of the number of craft breweries per 500,000 people. Finally, I’ve used Jenks classification to sort states into five categories, each of which is associated with a color determined by the ColorBrewer routine, implemented via the brewer.pal command included in R’s RColorBrewer library.
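For the curious, the classification step looks roughly like the following in R. This is a minimal sketch rather than the actual code behind the map; the data frame name (breweries), the column bp500k, and the YlGnBu palette are stand-ins.

```r
# Minimal sketch (not the original code): Jenks breaks + ColorBrewer colors.
# "breweries", "bp500k", and the "YlGnBu" palette are hypothetical stand-ins.
library(classInt)      # classIntervals() for Jenks classification
library(RColorBrewer)  # brewer.pal() for ColorBrewer palettes

breweries$ln_bp500k <- log(breweries$bp500k)   # log breweries per 500,000 people

# Sort states into five Jenks classes and assign each class a color
breaks <- classIntervals(breweries$ln_bp500k, n = 5, style = "jenks")
pal    <- brewer.pal(5, "YlGnBu")
breweries$color <- pal[findInterval(breweries$ln_bp500k, breaks$brks,
                                    all.inside = TRUE)]
```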

Just looking at the map, we can see clusters of high-craft-brewing states in New England and the Pacific Northwest, as well as a large cluster of low-craft-brewing states in the South. We can begin to quantify these patterns by using a set of exploratory techniques designed to capture the extent to which observations which share similar values on a given outcome also tend to share a similar spatial location.
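A common first test along these lines is global Moran’s I, which asks whether neighboring states tend to have similar (logged) brewery rates. Below is a hedged sketch using the spdep package; the spatial data frame states and its ln_bp500k column are assumptions, not the post’s actual objects.

```r
# Sketch of a global spatial autocorrelation test; object names are assumed.
library(spdep)

nb <- poly2nb(states, queen = TRUE)   # contiguity-based neighbor list
lw <- nb2listw(nb, style = "W")       # row-standardized spatial weights
moran.test(states$ln_bp500k, lw)      # global Moran's I for the logged rate
```

A significant positive statistic would be consistent with the visual impression of regional clustering.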


This is a guest post by Sean J. Taylor, a PhD student in Information Systems at NYU’s Stern School of Business.

Last Thursday and Friday I attended the 2nd annual DataGotham conference in New York City. Alex Hanna asked me to write about my experience there for the benefit of those who were unable to attend, so here’s my take on the event.

Thursday evening was a social event in a really sweet rooftop space in Tribeca with an open bar and great food (a dangerous combination for this still-grad-student). Though I spent a lot of the time catching up with old friends, I would describe the evening as “hanging out on Twitter, but in person.” I met no fewer than a dozen people I had only previously known online. I am continually delighted at how awesomeness on Twitter is a reliable indicator of awesomeness in person. Events like DataGotham are often worth it for this reason alone.


A few months ago I started listening to Tomahawk, a band described on Wikipedia as “an experimental alternative metal/alternative rock supergroup.” Beyond the quality of their music, I found myself intrigued by the musical background of their members. In addition to Tomahawk, their other bands include acclaimed groups such as Faith No More, Helmet, the Melvins, Fantômas, and the Jesus Lizard. Mike Patton alone has been affiliated with at least fifteen bands.

UPDATE 2013-10-01: Nate Porter pointed out that the Hacker League page doesn’t let you sign up. For now, use this Google doc.

A lot of folks on Twitter during ASA this year were chatting about the possibility of a hackathon during ASA 2014 in San Francisco. The reasons for having a hackathon, I think, are myriad; here are some of the various “purposes” that I and members of the computational sociology listserv have considered:

  • Incorporate computational methods into social science through teh h4x
  • Inspire participants to apply computational methods to common social science problems
  • Create an organizational nexus for computational sociology which makes it a vibrant and visible part of the discipline
  • Develop and foster social ties that strengthen the field and point to the value of non-traditional venues for collaboration
  • Create useful and interesting research products.
  • Solidify connections among sub-community of folks in/around sociology who have a set of skills/tools/interests in things computational
  • Increase visibility of that sub-community, partly by showcasing what can be done
  • To support the claim that sociology has a role to play in computational social science and that computation has a role to play in sociology.
  • Connect folks already immersed in these skill areas with folks who are around the edges, curious, etc.
  • To actually impart some new skills/ideas to folks.
  • To actually produce something collectively useful.
  • To lay foundation for something that could grow in future years at ASA meetings or in ASA in general (e.g., a network of folks working with these tools).

I’m really excited about the prospect of this. Laura Norén, Christopher Weiss, and I have been plotting to make this thing a reality. Right now we’re trying to gauge how many people would come out to such an event.

If you have even a tiny inkling that you might come to the hackathon, sign up at the Hacker League page.

As mentioned in a previous post, Alex Hanna and I had the opportunity to teach last week at the Higher School of Economics’ International Social Network Analysis Summer School in St. Petersburg. While last year’s workshop emphasized smaller social networks, this year’s workshop focused on online networks. For my part, I provided an introductory lecture on social network analysis along with four labs on R and social network analysis.

The introduction to social network analysis began with a historical overview, followed by an outline of the concepts that constitute a social network. The remaining portions covered subgraphs, walks, centrality, and cohesive subgroups, along with major research topics in the field. Setting aside the substantive interest in networks, the first lab covered basic R usage, objects, and syntax. Admittedly, this material was relatively dry, though necessary to make the most of the network analysis software in R. We followed this introduction to R with an introduction to R’s social network analysis software: the second lab introduced the class to the different network packages within R, reading data, the basic measurements brought up in the introductory lecture, and visualization. The third lab covered graph-level indices, random graphs, and conditional uniform graph (CUG) tests. Both the second and third labs were conducted primarily using the igraph package. The fourth and final lab covered exponential random graph modeling. For this lab, we walked through tests for homophily and edgewise-shared-partner effects using data on both our Twitter hashtag (#SNASPb2013) and US political blogs.
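To give a flavor of that final lab, the snippet below sketches an ERGM with homophily and edgewise-shared-partner terms. It is illustrative only: the network object blog_net and the vertex attribute party are hypothetical stand-ins for the political blog data, not the lab’s actual code.

```r
# Illustrative ERGM sketch; "blog_net" and "party" are hypothetical stand-ins.
library(ergm)

model <- ergm(blog_net ~ edges +
                nodematch("party") +        # homophily on political leaning
                gwesp(0.5, fixed = TRUE))   # edgewise-shared-partner effect
summary(model)
```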

The slides include scripts that download and read the data used within all lab examples.

I’ve hosted PDFs of all the slides on Google Drive.

Benjamin Lind and I have spent the last week and a half teaching at the Social Network Analysis Summer School at HSE-St. Petersburg. We’ve had about 30 students coming from as far away as South Africa and Sweden, with all levels of skill and many different research interests, and we’ve had the pleasure of teaching with some great instructors from around the world as well. You can read the backchannel chatter on the #SNASPb2013 hashtag.

I ran two labs on collecting network data from various Internet sources with Python. The first is a mashup of some of my prior workshops on collecting Twitter data via the API, and drawing network data through user mentions. The second shows how to retrieve network data by crawling blogs.

Technology-wise, it was my first time using a cloud service (Amazon EC2) and iPython Notebooks for teaching purposes. A few observations on EC2 for teaching: the t1.micro server level is not quite powerful enough to handle ~30 students parsing JSON or running scrapy, so you’ll have to up the juice. I found iPython Notebooks to be great, though: code highlighting and execution, LaTeX typesetting, and Markdown support make it a winner in my book.

I also put the code for each lab on GitHub: hse-twitter and hse-scrapy. Would love any contributions to these small scripts, especially the scraping code.

The ASA annual meeting starts on Friday, and the program is about 200 pages long. But don’t worry, we’ve got you covered. Here are a few computational sociology events that you should catch, suggested by folks on the computational sociology listserv.

If you know of any more that look interesting, feel free to post them in the comments and I’ll add them to this Google Calendar.

Seeing Shamus Khan and Phil Kasinitz’s ASA eating guide, I asked a colleague and friend of mine, Grace Nguyen (a former chef and sometime New Yorker), to put together a list of good, cheap(er) food options for NYC in preparation for this year’s ASA.
Here’s her compilation. A few more suggestions from her may be forthcoming in the comments. The places are linked to their Yelp pages.
You should also hit up some of these places after the Bad Hessian party, since you’ll be in the Village anyhow.

There have been repeated calls for “space” in many fields of social science (all links are behind paywalls, sorry):

  • Demography: (Voss 2007)
  • Sociology: (Gieryn 2000)
  • Epidemiology: for an early critical review (Jacquez 2000)
  • Geography: obviously geographers were into space before it was cool. A couple of pieces I like are Doreen Massey’s book, For Space and O’Sullivan (2006) for a review of GIS.
  • Anthropology: the proceedings of a conference including a piece by Clifford Geertz, Senses of Place (1996). Though what I’m writing here has less to do with the space/place debate.

These are nice papers about what the authors think should be new research agendas, but I think the social sciences need to stop calling for space and start “playing” with space. Let me explain…

This idea started when fellow Bad Hessian Alex Hanna, who is in my writing group, suggested that I read a paper about spatio-temporal models of crime in Chicago. Alex has suffered through many presentations of a paper I’m writing about crime in Chicago. Chicago? Crime? I mean, these have to be related papers, right? So I gave it a quick read:

Seth R. Flaxman, Daniel B. Neill, Alex J. Smola. 2013. Correlates of homicide: New space/time interaction tests for spatiotemporal point processes. Heinz College working paper, available at: http://www.heinz.cmu.edu/faculty-and-research/research/research-details/index.aspx?rid=483

…and it’s a really great paper! Flaxman reviews three standard measures for spatial and temporal independence and then proposes a new measure that can simultaneously test for spatio-temporal dependence. The measures are validated against real crime data from Chicago. On the other hand, it’s also completely useless for my project. I mean, I stuck it in a footnote, but I can’t engage with it in a substantively meaningful way because my paper is about the Modifiable Areal Unit Problem and the good ol’ MAUP is fundamentally about polygons — not points. The MAUP occurs because a given set of points can be aggregated into any number of different polygon units, and the subsequent results of models, bivariate relationships, or even hot spot analysis might change based on the aggregation method.
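As a quick toy illustration of the MAUP (not drawn from either paper), the same simulated points can yield different bivariate estimates depending on how coarsely they are aggregated into polygons:

```r
# Toy MAUP demo: aggregate identical points into coarser or finer grid cells
# and watch the aggregate-level correlation move. Purely simulated data.
set.seed(42)
x <- runif(1000); y <- runif(1000)            # point locations
a <- rnorm(1000); b <- 0.3 * a + rnorm(1000)  # two point-level variables

cor_at_resolution <- function(k) {
  cell <- interaction(cut(x, k), cut(y, k))   # assign points to a k-by-k grid
  cor(tapply(a, cell, mean), tapply(b, cell, mean), use = "complete.obs")
}

cor_at_resolution(4)    # coarse aggregation
cor_at_resolution(20)   # fine aggregation; typically a different estimate
```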

This means that Flaxman’s approach and my approach are not comparable because they each rest on different assumptions about how to measure distance, social interaction, and spatial dependence. They’re based on different spatial ontologies, if you will. But back to the main argument of this post: could we play around with the models in Flaxman, the models I’m making, plus some other models in order to test some of the implications of our ideas of space? Here are some hypothetical hypotheses….

Isotropy. Isotropy means that effects are the same in every direction. For example, weather models often take anisotropy into account because of prevailing wind direction. As Flaxman mentions at the end of the paper, alternative distance measures like Manhattan distance could be used. I would take it a step further and suggest that distance could be measured across a trend surface, which might control for higher crime rates on the south side of Chicago and in the near-west suburbs. Likewise, spatial regression models of polygon data can use polynomial terms to approximate trend surfaces. Do the additional controls for anisotropy improve model fit? Or change parameter estimates?
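As a sketch of what the polygon-side version could look like, the snippet below adds a quadratic trend surface to a spatial lag model and compares fit. The tracts_shp object, the variable names, and the lagsarlm() call (from the spdep/spatialreg family) are assumptions for illustration, not code from my paper.

```r
# Hedged sketch: spatial lag model with and without a quadratic trend surface.
# All object and variable names are hypothetical.
library(spdep)       # poly2nb(), nb2listw()
library(spatialreg)  # lagsarlm(); in older setups it lived in spdep itself

lw <- nb2listw(poly2nb(tracts_shp), style = "W")  # contiguity weights
df <- as.data.frame(tracts_shp)                   # attribute table with coords

fit_flat  <- lagsarlm(crime_rate ~ poverty, data = df, listw = lw)
fit_trend <- lagsarlm(crime_rate ~ poverty +
                        poly(x_coord, 2) + poly(y_coord, 2),  # trend surface
                      data = df, listw = lw)

AIC(fit_flat, fit_trend)   # does allowing a broad directional trend improve fit?
```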

Spatial discontinuities. A neighborhood model posits — albeit implicitly and in a sort of wishy-washy way — that there could be two locations that are very close as the crow flies, but are subject to dramatically different forces because they are in different polygons. These sharp breaks might really exist, e.g. “the bad side of the tracks”, red-lining, TIF funding, empowerment zones, rivers, gated suburbs. Or they might not. Point process models usually assume that space is continuous, i.e. that there are no discontinuities. Playing around with alternative models might give us evidence one way or another.

Effect decay. In spatial regression models like the ones I’m using, it’s pretty normal to operationalize spatial effects for contiguous polygons and then set the effect to zero for all higher-order neighbors. As in the Flaxman paper, most point models use some sort of kernel function to create effect estimates between points within a given bandwidth. These are both pretty arbitrary choices that make spatial effects too “circular”. For example, think of the economic geography of interstate interchanges in middle America. You’ll see fast food, big box retail, gas stations, car dealerships, hotels, etc. at almost every interchange. Certainly there is a spatial pattern here, but it’s not circular and it’s not (exponentially, geometrically, or linearly) decaying across distance. Comparisons between our standard models — where decay is constrained to follow parametric forms — and semi-parametric “hot spot” analyses might tell us whether our models of spatial effects are too far from reality.
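Here is a hedged sketch of what such a comparison could look like with spdep, refitting the same model under binary first-order contiguity weights and under inverse-distance weights within a fixed band; again, all object and variable names are hypothetical.

```r
# Hedged sketch: same spatial lag model under two arbitrary decay choices.
# Object and variable names are hypothetical; distances assume projected meters.
library(sp)          # coordinates()
library(spdep)       # poly2nb(), dnearneigh(), nbdists(), nb2listw()
library(spatialreg)  # lagsarlm(); in older setups it lived in spdep

coords <- coordinates(tracts_shp)    # polygon centroids
df     <- as.data.frame(tracts_shp)  # attribute table

# (1) binary contiguity: the effect drops to zero past immediate neighbors
lw_contig <- nb2listw(poly2nb(tracts_shp), style = "W")

# (2) all neighbors within 5 km, weighted by inverse distance
nb_band <- dnearneigh(coords, d1 = 0, d2 = 5000)
idw     <- lapply(nbdists(nb_band, coords), function(d) 1 / d)
lw_idw  <- nb2listw(nb_band, glist = idw, style = "W", zero.policy = TRUE)

fit_contig <- lagsarlm(crime_rate ~ poverty, data = df, listw = lw_contig)
fit_idw    <- lagsarlm(crime_rate ~ poverty, data = df, listw = lw_idw,
                       zero.policy = TRUE)
AIC(fit_contig, fit_idw)   # which decay assumption fits better?
```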

Ok. Those sound like valid research questions, so why not just do that research and publish some results? As I see it, spatial work in the social sciences usually boils down to two main types of writing. First, there are the papers that aren’t terribly interested in the substantive research areas, and are more about developing statistical models or testing a bunch of different models with the same data. Here are some examples of that type:

  • (Dormann et al 2007) undertake a herculean task by explicating and producing R code for no fewer than 13 different spatial models.
  • (Hubbard et al 2010) compare GEE to mixed models of neighborhood health outcomes.
  • (Tita and Greenbaum 2009) compare a spatial versus a spatio-social network as weighting matrices in spatial regression.

The problem with this approach is that the data is often old, simplified data from well-known example datasets. Or worse yet, it is simulated data with none of the usual problems of missing data, measurement error, and outliers. At best, these papers use oversimplified models. For example, there aren’t any control variables for crime even though there is a giant body of literature about the socio-cultural correlates of spatial crime patterns (Flaxman and I are both guilty of this).

The second type of research is just the opposite: interested in the substantive conclusions and uninterested in the vagaries of spatial models. These papers might compare hierarchical or logistic regressions to the spatial regressions, but they very rarely go in depth about all the possible ways of operationalizing the spatial processes they’re studying. And when you think about it, you can’t blame them, because journal editors like to see logical arguments for the model assumptions used in a paper – not an admission that we don’t know anything about the process under study and a bunch of different models, each with a slightly different operationalization of the spatial process. But here’s the thing: we don’t actually know very much about the spatial processes at work! And we have absolutely no evidence that the spatial processes for, say, crime are also useful in other domains like educational outcomes, voting behavior, factory siting, human pathogens, or communication networks.

Thus, we don’t need more social science papers that do spatial models. We need (many) more social science papers that run multiple, incongruent spatial models on the same substantively rich datasets. I think it’s only by modeling crime, for example, as an isotropic point process, as a social network with spatial distance between nodes, and as a series of discrete neighborhood polygons that we can start to grasp whether one set of assumptions about space is more/less accurate and more/less useful. In case you couldn’t tell, I’m a big fan of George Box’s famous quote. This is the slightly longer version:

“Remember that all models are wrong; the practical question is how wrong do they have to be to not be useful.” (Box & Draper 1987, 74)

Good luck, and go play!

[Update, 2013-07-22: I changed the citation to the Flaxman paper, as it is now a working paper in his department at Carnegie Mellon University.]