How to Create and Cluster Topic Files in Lexos

This post is a follow-up to last year’s How to Create Topic Clouds with Lexos, where I showed how Lexos can be used to visualise topic models produced by Mallet. From time to time, colleagues have wondered whether it would be possible to use Lexos to perform cluster analysis on the topics Mallet produces. The motivation for doing this is simple enough: topics are often very similar, and it would be useful to have some statistical measure of this similarity to help us decide whether groups of topics really should be interpreted under some meta-class. Some added urgency has arisen in discussions for the 4Humanities WhatEvery1Says Project, which is topic modelling a large collection of public discourse about the Humanities. We’ve begun considering whether cluster analysis of topic models can help us to refine our experiments.

The first step is finding a way to massage the Mallet data into a form we can submit to clustering algorithms. Lexos already transforms Mallet output into a topic-term matrix, which it then uses to make word clouds from the top 100 words in each topic. Essentially, the topics are treated just like text documents (or at least slices of them). … Read more…
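In case it’s useful to see that step outside Lexos, here’s a minimal Python sketch of the same idea. To be clear, this is not the Lexos code itself: it assumes MALLET was run with the --topic-word-weights-file option, which (as far as I know) writes one tab-separated topic, term, and weight triple per line, and it hands the resulting matrix to scipy for hierarchical clustering.

```python
from collections import defaultdict

import matplotlib.pyplot as plt
import numpy as np
from scipy.cluster.hierarchy import dendrogram, linkage
from scipy.spatial.distance import pdist


def topic_term_matrix(path, top_n=100):
    """Build a topic-term matrix from a MALLET topic-word weights file."""
    weights = defaultdict(dict)  # topic id -> {term: weight}
    with open(path, encoding="utf-8") as f:
        for line in f:
            topic, term, weight = line.rstrip("\n").split("\t")
            weights[int(topic)][term] = float(weight)
    # Keep only each topic's top terms, mirroring the word-cloud cutoff.
    for topic, terms in weights.items():
        top = sorted(terms, key=terms.get, reverse=True)[:top_n]
        weights[topic] = {t: terms[t] for t in top}
    vocab = sorted({t for terms in weights.values() for t in terms})
    matrix = np.array([[weights[k].get(t, 0.0) for t in vocab]
                       for k in sorted(weights)])
    return matrix, vocab


matrix, vocab = topic_term_matrix("topic_word_weights.txt")  # placeholder file name
# Cosine distance with average linkage is one reasonable starting point;
# cosine helps because the topics' weight vectors differ in overall magnitude.
tree = linkage(pdist(matrix, metric="cosine"), method="average")
dendrogram(tree, labels=[f"Topic {i}" for i in range(len(matrix))])
plt.show()
```

The resulting dendrogram gives a first statistical answer to the question of which topics belong together under a meta-class.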



How to Create Topic Clouds with Lexos


Some Background

Topic modelling is gaining momentum as a research method in Digital Humanities, with MALLET as the general tool of choice. However, many would-be topic modellers have struggled to make effective use of MALLET’s output, which is raw data. In fact, there has been a growing movement to devise methods of visualising topic modelling data generally. A while back, Elijah Meeks had an idea for generating topic clouds: separate word clouds for each topic in the model. [I can’t seem to access his original blog post, but here is his code on GitHub.] Although word clouds have their problems as visualisations, Meeks speculated that they were particularly effective for examining topics in a topic model. Indeed, others have used word clouds to visualise topic modelling results, most notably Matt Jockers in the digital supplement to his Macroanalysis. One of the things I liked about Meeks’ implementation using d3.js was that it placed the clouds next to each other so that they could be compared.
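Meeks built his version in d3.js. Purely as a rough illustration of the idea (not his code, nor the Lexos implementation), here is a hypothetical Python sketch that draws one cloud per topic in a single row, using the third-party wordcloud package and assuming the same tab-separated MALLET topic-word weights format as in the clustering sketch above:

```python
from collections import defaultdict

import matplotlib.pyplot as plt
from wordcloud import WordCloud

# Parse MALLET's topic-word weights (assumed: topic <TAB> term <TAB> weight per line).
weights = defaultdict(dict)
with open("topic_word_weights.txt", encoding="utf-8") as f:  # placeholder file name
    for line in f:
        topic, term, weight = line.rstrip("\n").split("\t")
        weights[int(topic)][term] = float(weight)

# Draw one cloud per topic in a single row so the topics can be compared side by side.
fig, axes = plt.subplots(1, len(weights), figsize=(4 * len(weights), 4))
for ax, topic in zip(axes, sorted(weights)):
    cloud = WordCloud(width=400, height=400, background_color="white")
    ax.imshow(cloud.generate_from_frequencies(weights[topic]), interpolation="bilinear")
    ax.set_title(f"Topic {topic}")
    ax.axis("off")
plt.show()
```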

I quickly transferred this idea to our work on the Lexomics project and to our software, Lexos. In Lexomics, we frequently cut texts into chunks or segments, which can then be clustered to measure similarities and differences. … Read more…
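For readers who haven’t seen the Lexomics workflow, here is a rough, hypothetical sketch of what “cut and cluster” looks like in Python. It is not the Lexos code, and the file name and segment size are just placeholders; the point is that segments are clustered exactly the way the topics are in the sketch above:

```python
import re

import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import dendrogram, linkage
from scipy.spatial.distance import pdist
from sklearn.feature_extraction.text import CountVectorizer


def cut(text, size=1000):
    """Split a text into consecutive segments of `size` tokens each."""
    tokens = re.findall(r"\w+", text.lower())
    return [" ".join(tokens[i:i + size]) for i in range(0, len(tokens), size)]


with open("mytext.txt", encoding="utf-8") as f:  # placeholder input file
    segments = cut(f.read())

# Document-term matrix of the segments, scaled to proportions so that
# segment length doesn't dominate the distances.
dtm = CountVectorizer().fit_transform(segments).toarray().astype(float)
dtm = dtm / dtm.sum(axis=1, keepdims=True)
tree = linkage(pdist(dtm, metric="euclidean"), method="average")
dendrogram(tree, labels=[f"Segment {i + 1}" for i in range(len(segments))])
plt.show()
```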



Play as Process and Product: On Making Serendip-o-matic

I’m at the DH 2014 conference in Lausanne, Switzerland, and enjoying it immensely, despite cold and rainy weather that should be impossible in July. I’ve just delivered my paper “Play as Process and Product: On Making Serendip-o-matic” (abstract here), along with colleagues Mia Ridge and Brian Croxall (co-author Amy Papaelias couldn’t make it but contributed remotely). I’ll blog more on the conference itself in a separate post, but for now I thought I’d put my portion of the presentation online. Here’s Mia’s portion, and here’s Brian’s.

Play as Process and Product: On Making Serendip-o-matic

Hi, I’m Scott Kleinman, and my job is to introduce you to the One Week | One Tool experience that led to the creation of Serendip-o-matic. One Week | One Tool was a summer institute sponsored by the National Endowment for the Humanities. It was organised by Tom Scheinfeldt and Patrick Murray-John, and hosted by the Roy Rosenzweig Center for History and New Media at George Mason University. The idea for One Week | One Tool was inspired by models of rapid community development and advertised as a digital “barn-raising”, in which a diverse group of twelve DH practitioners would gather “to produce something useful for humanities work and to help balance learning and doing in digital humanities training.” The entire process from conception to release was to occur in six days. … Read more…
