Tenure-Track Position in Digital Humanities at California State University, Northridge

We’re hiring an assistant professor of Digital Humanities. Here’s a short description (follow the link at the end for the full announcement):

California State University, Northridge seeks candidates for a tenure-track assistant professor specializing in Digital Humanities skills (e.g. Mapping, Network Analysis, Data Visualization, Data Mining, Data Literacy, Digital Scholarly Editing). Secondary interests may include: Social Media Studies, Popular Cultural Studies, Computational Linguistics, Elementary Education, Russian Studies, Modern China Studies, Sustainability Studies. A PhD awarded prior to August 19, 2015 is required. The position will be housed in the Liberal Studies Program, and the successful candidate will be required to teach ITEP (Integrated Teacher Education Program) courses in the candidate’s area of expertise from one of the ITEP disciplines (linguistics, humanities, natural sciences, mathematics, visual/performing arts, or social sciences), as well as Popular Culture related courses on Social Media. Evidence of teaching effectiveness required; publication desirable. Standard teaching load is 4/4, although competition-based reassigned time is normally available. Applicants should demonstrate a commitment to working at a Learning-Centered University with a diverse student population drawn largely from the Los Angeles area. Please review the full job announcement on the department website. Send cover letter, CV, three letters of recommendation, brief writing sample (15-20 pages) or equivalent sample of scholarly digital work, abstract of a representative work (such as a book project or dissertation), a statement of teaching philosophy, and evidence of teaching excellence to Dr. Ranita Chatterjee, Liberal Studies Program, CSUN, Northridge, CA 91330-8338. Primary consideration given to applications received by March 30, 2015. CSUN is an EO/AA employer.

The full position announcement is available at http://www.csun.edu/sites/default/files/LRS-Faculty-Digital-Hum.pdf.

How to Create Topic Clouds with Lexos

Some Background

Topic modelling is gaining increasing momentum as a research method in Digital Humanities, with MALLET as the general tool of choice. However, many would-be topic modellers have struggled to make effective use of MALLET’s output, which is raw data. In fact, there has been a growing movement to devise methods of visualising topic modelling data generally. A while back, Elijah Meeks had an idea for generating topic clouds: separate word clouds for each topic in the model. (I can’t seem to access his original blog post, but here is his code on GitHub.) Although word clouds have their problems as visualisations, Meeks speculated that they were particularly effective for examining topics in a topic model. Indeed, others have used word clouds to visualise topic modelling results, most notably Matt Jockers in the digital supplement to his Macroanalysis. One of the things I liked about Meeks’ implementation using d3.js was that it placed the clouds next to each other so that they could be compared.

I quickly transferred this idea to our work on the Lexomics project, and our software Lexos. In Lexomics, we frequently cut texts into chunks or segments, which can then be clustered to measure similarities and differences. I thus borrowed Meeks’ topic clouds idea and created the Multicloud tool in Lexos to provide a visual way to compare the segments. Lexos allows you to slice and dice your texts and then generate word clouds of the resulting chunks–all in a web-based interface. For a long time now, I have thought it would be great to use Lexos as a tool for generating topic cloud visualisations as well.

However, this is not straightforward. In order to create word clouds from MALLET-generated topics, you need to transform the data from MALLET’s not-very-friendly output format into something that can actually be read by the d3.js script. In practice, this requires some programming skills. Lexos to the rescue! We’ve now given Lexos the ability to produce topic clouds directly from MALLET data.

How to Do it Yourself

A word of warning. We’re still working on the user interface, so some of the specifics of the procedure may change slightly.

Before you begin, you will need MALLET to produce a file containing the word counts of each word in each topic. Many people don’t get this data. The GUI Topic Modeling Tool doesn’t produce it, so you have to use the command line version of MALLET. Even then, you might still not be getting the right data. If, like many, you use the code in the tutorial provided by The Programming Historian, you are not going to get the data you need. But it’s an easy fix.

Update (11 June 2015): Lexos can now process the MALLET “output-state” file, which everybody produces, whether they are following the Programming Historian tutorial or running the GUI Topic Modeling tool. It’s probably still a good idea to follow the instructions below because you have to unzip the “output-state” file before uploading it to Lexos (I recommend 7zip for this), and the file is much larger, which will increase the uploading time. 

Let’s take a look at the command provided by The Programming Historian:
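It looks something like this (the input file, topic count, and output filenames below follow the tutorial’s examples; substitute your own):

```shell
bin/mallet train-topics --input tutorial.mallet --num-topics 20 \
  --optimize-interval 20 \
  --output-state topic-state.gz \
  --output-topic-keys tutorial_keys.txt \
  --output-doc-topics tutorial_composition.txt
```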

The last portion is a good illustration of how MALLET commands work. You have a flag like --output-doc-topics, which tells MALLET to produce a file containing the topics in each document, and then you give it a filename, in this case tutorial_composition.txt. So we need to tell MALLET to create a file with the word counts in each topic. The flag for this is --word-topic-counts-file. Simply add this flag and a filename to the end of the MALLET command:
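With the extra flag added, the command looks something like this (filenames and topic count are illustrative):

```shell
bin/mallet train-topics --input tutorial.mallet --num-topics 20 \
  --optimize-interval 20 \
  --output-state topic-state.gz \
  --output-topic-keys tutorial_keys.txt \
  --output-doc-topics tutorial_composition.txt \
  --word-topic-counts-file word_topic_counts.txt
```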

Now run MALLET. You’ll get a word_topic_counts.txt file along with the rest of the topic model data. This is the file you’ll feed into Lexos.
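If you’re curious what’s inside that file, each line records one word from the vocabulary followed by a series of topic:count pairs. Here’s a sketch with made-up words and counts (the awk one-liner is purely illustrative; Lexos does this parsing for you):

```shell
# Each line of word_topic_counts.txt has the form:
#   <type-index> <word> <topic>:<count> <topic>:<count> ...
# A tiny made-up sample:
printf '0 beowulf 0:52 3:4\n1 grendel 0:31 2:2\n' > word_topic_counts.txt

# Pull out the words and counts assigned to topic 0:
awk '{ for (i = 3; i <= NF; i++) { split($i, a, ":"); if (a[1] == "0") print $2, a[2] } }' word_topic_counts.txt
# → beowulf 52
# → grendel 31
```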

Typically, you upload texts to Lexos for further processing, but since you already have your data file, you can head straight over to the Lexos Multicloud tool, where you can upload your MALLET file. (Note: At the moment, we’ve placed the upload function directly on the Multicloud page; that might change in the future.) You’ll see something like this:

Multicloud screen shot

Click the radio button under Upload a Mallet Topic File, and then click the Upload button. Select your word_topic_counts.txt file or output-state.txt file and upload it. Then click Get Graphs. If you have a large number of topics, it may take a while for the graphs to appear. Be patient. But that’s it!

Now for the bonus. If you mouse over the words, you’ll be able to see the word counts in tooltips. You can also re-order the clouds by dragging and dropping them into different locations. This is valuable because you can bring topics that might be sequentially distant (e.g. Topics 1 and 100) into greater proximity for easy comparison. Update (11 June 2015): I have learnt a lot about d3 word clouds over the past few months. I hope to have a follow-up post about them in the near future.

Update: I am constantly asked how to specify the number of keywords given in the MALLET output file, and I can never remember the answer. So I’m adding it here to make it easy to find. The argument

--num-top-words 15

in the MALLET train-topics command will output 15 keywords.

Play as Process and Product: On Making Serendip-o-matic

I’m at the DH 2014 conference in Lausanne, Switzerland, and enjoying it immensely, despite cold and rainy weather that should be impossible in July. I’ve just delivered my paper “Play as Process and Product: On Making Serendip-o-matic” (abstract here), along with colleagues Mia Ridge and Brian Croxall (co-author Amy Papaelias couldn’t make it but contributed remotely). I’ll blog more on the conference itself in a separate post, but for now I thought I’d put my portion of the presentation online. Here’s Mia’s portion, and here’s Brian’s portion.

Play as Process and Product: On Making Serendip-o-matic

Hi, Iʼm Scott Kleinman, and my job is to introduce you to the One Week | One Tool experience which led to the creation of Serendip-o-matic. One Week | One Tool was a summer institute sponsored by the National Endowment for the Humanities. It was organised by Tom Scheinfeldt and Patrick Murray-John, and hosted by the Roy Rosenzweig Center for History and New Media at George Mason University. The idea for One Week | One Tool was inspired by models of rapid community development and advertised as a digital “barn-raising”, in which a diverse group of twelve DH practitioners would gather “to produce something useful for humanities work and to help balance learning and doing in digital humanities training.” The entire process from conception to release was to occur in six days. Last year, our group gathered for that brief period and brought into the world Serendip-o-matic.

The Serendip-o-matic Team:  Brian Croxall, Digital Humanities Strategist and Lecturer of English, Emory University; Jack Dougherty, Associate Professor of Educational Studies, Trinity College; Meghan Frazer, Digital Resources Curator at the Knowlton School of Architecture, Ohio State University; Scott Kleinman, Professor of English, California State University, Northridge; Rebecca Sutton Koeser, Software Engineer, Emory University; Ray Palin, Librarian and Teacher, Sunapee Middle High School; Amy Papaelias, Graphic Designer and Assistant Professor of Art, State University of New York at New Paltz; Mia Ridge, PhD Candidate in digital humanities in the Department of History, Open University, United Kingdom; Eli Rose, Computer Science/Creative Writing Major, Oberlin College; Amanda Visconti, Graduate Research Assistant, Maryland Institute for Technology in the Humanities; Scott Williams, Archivist, Yale Art Gallery; Amrys Williams, Visiting Assistant Professor, Department of History, Wesleyan University

Briefly, Serendip-o-matic searches large public image databases for objects containing user-submitted keywords in their metadata. Well, maybe that means something to this audience, but most people would be snoring by halfway through the sentence if not for the funky name “Serendip-o-matic”. The striking feature of this tool is that it is fun; from the name to the workflow, Serendip-o-matic avoids the dry academic approach in favour of an intuitive, playful user experience that requires no previous understanding of APIs or other technical details.

As I’m talking, Mia will give you a quick demonstration using a text based on the conference keynote abstracts. The thinking behind Serendip-o-matic was that it would replicate the serendipitous discoveries we used to make when wandering through a library archive and finding an unexpected treasure sitting on the shelf next to the book we were looking for. Serendip-o-matic aims to re-capture this experience in a digital context.

Although play was not part of its original conception, One Week | One Tool was almost by nature organised in a game-like framework since it suspended typical rules of scholarly activity and forced a group of mostly strangers to determine their activities in a rather ad hoc manner, rather like a THATCamp. But, unlike a THATCamp, we were tasked with building something from start to finish. Although we came from many different disciplines, we had to suspend many of our disciplinary inclinations and instead play whatever roles fell to us in the game. Propelled along by a fixed timeline and the need for a deliverable, we had minimal time for reflection or critical activity.

Nevertheless, our awareness of the artificial scenario in which we were placed increased as the week progressed, emerging almost organically from the physical spaces we occupied, the language we used, and the frames of reference we applied when we did pause to reflect. When we first met each other, Tom Scheinfeldt asked us to name our “super-powers”.

The Serendip-o-matic team describe their superpowers

For us, his terminology evoked the skills we might need to get the group through a heroic quest to produce a magical DH tool. We felt like participants in a live-action role-playing game, each with designated abilities and skills, and we began to conceive of Tom as a kind of Dungeon Master who had created a world in which everyday life was suspended. Tom encouraged this by making it a purposeful design decision to keep us on campus for the first half of the week, much of which we spent holed up in a tower, emerging only periodically to worship before the altar of the Center’s well-stocked Keurig machine.

The Tower
The Center for History and New Media

With the clock ticking, we felt the pressure of a challenge mode in a game. This was play, but serious play in that it forced us to push ourselves.

As the week progressed, another metaphor began to take hold. The artificial ways in which we were forced to interact made us feel like participants in a reality TV programme, and we embraced this metaphor further by using social media to involve the public in the process. We blogged and tweeted often with teasers, never revealing exactly what we were going to produce. We also planned a public launch to build the suspense and anticipation. More importantly, we solicited feedback from the larger DH “community of practice” on the types of tools we were considering.

Our IdeaScale request for feedback publicised through DHNow.

As Scott Williams noted, it felt a bit like being in a fishtank, with people watching us and speculating about what we were doing. This raised the stakes considerably.

The results were unsettling for both the organisers and participants. Tom Scheinfeldt and Patrick Murray-John likened the development process to an out of body experience. Things just sort of happened, propelled along by the rather raw forces that had been set up by the world they had created. At the same time, we participants were acutely aware of our strange break from reality. For that one week, we were absolved from our duties at work, from the everyday rhythms of our lives; we focused only on the goal of coming up with an idea and making it work. This unsettling quality was probably created by a fundamental tension between our sense of having entered into Huizingaʼs “magic circle”, where the rules of play applied, and our sense that people in the outside world were keeping score. Our actions were defined by this “magic circle” but had repercussions outside it. That led to increasingly long working days in which we tended to become giddy with caffeine and the glare of our computer screens.

In this strangely stressful environment, we often vacillated between the kinds of pressures we tend to feel at work and the sense of che sarà sarà that we might adopt when playing a game. We quickly started to look for ways to make the process fun—so that the pressure to produce did not make it feel like being at work—and gradually the desire to make the process playful led to the manifestation of elements of play within the product itself. The machinery imagery in the design of Serendip-o-matic probably reflected the fact that we all felt like we were being put through the grinder, and the growing prominence of our sassy Hippo mascot helped keep things light.

The SerendHipp

Although we had not set out to do so going into One Week | One Tool, we began to produce in Serendip-o-matic a tool that was both playful and for play. So—now Iʼll turn the floor over to Mia, who will talk about how this played out (get the pun?) in the design of the tool.

Digital Humanities as Gamified Scholarship

The Digital Humanities trace their origins back to Father Roberto Busa’s efforts to analyse the works of Thomas Aquinas in the 1940s, which were followed by further efforts to perform textual analysis with the aid of computers. Since that time, the Digital Humanities have expanded to encompass a myriad of other activities (acquiring their name in the process) and attracted a devoted community of practitioners. Nevertheless, doubts persist about whether the growth of the Digital Humanities has had, or has the potential to have, any significant impact on scholarship in the Humanities as a whole. Although I can’t say for certain, my feeling is that when doubters look back at the past, they tend to be thinking primarily of computational textual analysis as the method that has failed to obtain a wide impact. Whether this is a fair assessment of the Digital Humanities, or whether the appropriate criteria have been selected for assessing the significance of even this one area, is worthy of discussion, but my intention here is to look forward rather than back. Computational textual analysis is beginning to evolve more rapidly, and to become more widely accessible to both students and scholars, meaning that the past should not be taken as an indication of the future.

The potential of computational textual analysis as a pedagogical tool is something that I will pass over quickly because it is a large topic. But I will mention three ways in which computational textual analysis can make an impact through classroom teaching. First, it provides a structure through which students can move between close and contextual readings, helping students to achieve genuine insight without the benefit of the years of study that more advanced students and established scholars enjoy. I have written about this elsewhere. Second, it increases exposure to computational textual analysis. If the method is to impact the broader field, there has to be a critical mass of students who are familiar with it. Third, computational textual analysis, like many skills taught in the Humanities, is transferable to the workplace. However, in the case of computational analysis (and associated visualisation techniques), the value is arguably easier to perceive (and believe in) for those doing the hiring. But enough on pedagogy. That topic is important enough to deserve a blog post of its own.

Here I want to focus on the scholarly impact of computational textual analysis as it is likely to take shape in the coming years. One of the most exciting developments is that the praxis is starting to acquire a theory—at least, something that is starting to look more acceptable qua theory to those who think the Digital Humanities are undertheorised. In their introduction to the recent special issue of DHQ on the Literary and the Digital Humanities, Jessica Pressman and Lisa Swanstrom present Franco Moretti and Jerome McGann as two ends of the methodological spectrum. Moretti’s notion of “distant reading” by means of computational models of literary texts challenges traditional methodologies by interpreting texts without close analysis of their contents; McGann’s focus on the “textual condition” of the material objects of study forces us to confront the dynamic nature of textual forms: our interpretations are based on a set of contingent and unique conditions, like an individual performance. The latter leads to the concept of textual “deformance”—the intentional disruption of textual form in order to draw attention to meanings we might not have noticed otherwise. As it becomes easier and easier to deform digital texts algorithmically using computers, the possibilities for experimenting with textual deformance grow. This recognition underlies Stephen Ramsay’s concept of “screwmeneutics” (which he elaborates as “algorithmic criticism” in Reading Machines). Computers are tools for “screwing around” with texts, just to see what happens. In a sense, that is what Moretti is doing as well. By reducing texts to data points, and then mapping their relationships to each other, he seeks to discover insights he might have missed without this broad relational overview.

Whilst this screwing around with the text has something in common with the rhetorical “play” emphasised in some theoretical trends of the last few decades, today it is most often linked to the “performative” aspect of reading. But I want to focus here on another type of similarity between computational analysis and play, a similarity to video games, which would seem à propos, given their common use of digital technologies. A video game is an immersive, interactive environment, often a model or simulated version of the real world, but with some elements abstracted. The player enters the world of the game (what Huizinga called the “magic circle”) and manipulates elements of its world. There is now a well-established literature about the similarities and differences between video games and literature, much of it dealing with the native interactivity of video games. But it tends to be more concerned with the general experience of games and literature in the digital and non-digital media than with acts of scholarship as “play”. To understand what this entails, it seems to me useful to side-step to the adjacent field of gamification. Gamification refers to the incorporation of game-like elements into activities not otherwise considered to be games. The concept has a rapidly-growing following in the business world, where gamification is a strategy for increasing productivity, user satisfaction, and other measures of business success. The basic idea is that human activity is enhanced through fun. If our screwing around digitally with texts is like manipulating the elements of a video game for “fun”, then we have essentially gamified the scholarly process.

Some caveats before we get to the implications. I am not suggesting that traditional forms of reading and interpretation of texts are not immersive, or even that they are not interactive. I am merely suggesting that, due to the incorporation of digital technologies, the extent or quality of engagement shares something in common with video games. Nor am I suggesting that this engagement is ontologically different from traditional acts of reading and interpretation. My point is that it is worth exploring the implications of locating computational methods nearer to video games on the continuum.

What then are the implications of postulating a gamified form of computer-based interpretation? In For the Win: How Game Thinking Can Revolutionize Your Business, Werbach and Hunter postulate that gamification involves both an understanding of game design and an understanding of business techniques (9). Translated into the language of scholarship, the latter could easily equate to traditional forms of disciplinary knowledge. The striking addition, then, is the element of design. Perhaps this too has its equivalent in the rhetoric chosen for publication, but in the digitally-enhanced world it can mean a great deal more. Taking the video game analogy at its most literal level, digital textual analysis requires the scholar to create a world in which the individual components can be manipulated by algorithms in internally consistent ways that both simulate and diverge from reality (i.e. that which lies outwith any world constructed for the scholarly purpose) or from any other world from which these components and algorithms are drawn. This design process again has its analogy in literary theory, but because its activities can seem more like the activities of other disciplines (often computer science and statistics), that analogy is sometimes lost. For those engaged in this world design, the ideological components of their activities tend to recede, prompting criticism from the more ideologically-oriented sectors of the Humanities. For the Humanities as a whole, there is a tendency for some scholars to dismiss this design process as atheoretical or unconcerned with the theories relevant to the Humanities. The former I find unconvincing. The latter I think reflects a tendency to refuse to accept that there are more things in heaven and earth than are dreamt of in their philosophies—and some of those things might be interesting.
Regardless, the quality of design that makes it so distinct from the argumentative form of scholarship that has dominated the Humanities since perhaps the latter half of the twentieth century is its practical nature, often leading to the construction of a tangible product whose purpose is not explicitly persuasive. This is what prompted Stephen Ramsay to define Digital Humanities scholarship as “building”.

For many, this divergence from the standard scholarly paradigm stands alongside the element of “play” as a barrier to engaging in Digital Humanities work. (In the case of computational textual analysis, fear—often culturally reinforced fear—of mathematics and coding also plays a role.) The impact of a tenure and promotion committee asking, “What is this?” and being told, “It was a game” or “I was screwing around with the texts” should not be underestimated. The gamification literature speaks of “serious games” like flight simulators for training pilots, but at best this approach constructs gamified research as Hilfswissenschaft, preparatory work for the real business of scholarship and therefore of secondary importance for the purposes of professional advancement. I will nevertheless leave discussion of institutional barriers to acceptance to others. In my department, faculty come up for tenure and promotion on a five-year cycle, and, barring any major blips, a single peer-reviewed article as an Assistant Professor and one as an Associate Professor will generally be enough for promotion to the next level. Yes, there is a price to be paid for this relatively easy process, but my point is that I don’t feel qualified to speak to the pressures faced by my colleagues at other institutions.

Instead, I want to think further about how games can be “serious”. In the business model of gamification, motivation is the key element: motivation for workers to perform better or for customers to engage more with the product. What might this look like for a gamified form of scholarship? The element of play encourages experimentation and productive (expected) failure, which enhances innovation and creativity. Textual analysis, if conceived of as a “serious game”, could be expected to deliver definitive answers in the same way a flight simulator might be expected to help guarantee a safe landing. Running your texts through an algorithm could be considered a “dry run” prior to more traditional forms of interpretation. I don’t want to discount the value of this approach. As anyone who has done topic modelling knows, algorithms can generate lots of junk and noise alongside meaningful results, and the experimentation required to extract the latter is a good way to test one’s theories. If algorithms show female discourse in a text or corpus to be different from male discourse in ways that would surprise a feminist scholar, this needs to be addressed somehow, either through further experimentation or through reconsideration of the theory. The Humanities should provide answers to important questions, but they should also raise at least as many questions as they answer. Gamified methodologies help provoke these questions.

This is all to say that a more gamified type of scholarship can have an impact. But how would this work? In the case of text analysis, an impactful gamification would require transferring internal motivations to external ones. One way to do this is to make tool design part of the process. This is not to ensure that the data and results can be reproduced by others but to ensure that the game can be re-played in multiple variations. Each game is not an exact match of the last; that’s not the point. The point is to spread the use of the game. A tool that others find useful (or fun) will be adopted more widely for Humanities scholarship, and it is hard to argue that a tool that is successful in this way has no impact.

But widespread adoption of a tool is unlikely to satisfy critics who see the “results” of its use as not contributing to the discourse of the individual Humanities disciplines which initially motivated its creation. It may seem odd that digital humanists should be castigated for looking outward, rather than inward, from their home disciplines, but we do in fact want to make contributions to the fields in which we were trained. But we still have a long way to go in figuring out how the “game world” of play can interact with the “real world” of scholarship. An effective means might be the creation of a community in which both circles interact (in the Lexomics project, we are attempting to build a community-based “best practices” component to address this issue). But managing participatory communities has been a gamification challenge in the business world, and it is no less a challenge in the academic one.

A more easily tackled approach is to make the tool itself fun, motivating the “real world” scholar to enter the “game world” temporarily—just to see what happens. Part of the “fun” of a game can be an aesthetic experience, and the design of the tool can certainly contribute to that experience. In text analysis, visualisations can play an important role. Graphs based on text analysis are not themselves sufficient if they are boring to look at. They must excite the imagination. Both the tool designer and the tool user engage in (and collaborate in) acts of visual rhetoric. The product can evoke a full range of responses, which become part of the experience.

It remains to be seen how that experience is transferred to the “real world” of scholarship, transcending Huizinga’s magic circle. The “dry run” approach is one solution. The game is part of the process, not the outcome, and the experience is taken to inform other scholarly decisions. We might also look to the game mechanics. Being forced to analyse texts in an artificial setup requires the scholar to re-think categories of analysis or the status of the materials being analysed. There may be an analogy in this with the theoretical jargon which forms part of the rhetoric of much writing in the Humanities. But, as with theoretical jargon, engagement with the mechanics of the game can distract from the deliverables, so to speak. This was recognised in the recent “just the results” panel at the MLA Convention organised by the Association for Computers and the Humanities. Perhaps a self-consciously gamified form of scholarship will require us to think clearly about the effective separation of the procedural and presentational components of our scholarship.

But what must not be lost in this separation is the meaningfulness of the play. A game must offer meaningful experiences to engage its players, particularly problem solving. For scholarship, that means asking and trying to answer meaningful questions. Here there is considerable debate in the Digital Humanities community as to how much these questions have to relate to the traditional questions of the Humanities. Personally, I am not inclined to be prescriptive on this issue. An important part of the game experience is making choices and finding out what happens. I would like to leave this space as wide open as possible. Initially, we needn’t think of our playful experiments as providing any necessary insight into our “real world” scholarship, nor should we let that scholarship impose strict constraints on our play. That defeats the purpose of the game. We are not building Camelot—only a model.

Gamification can suggest a number of strategies for demonstrating the relationship between our play and our scholarly endeavours. Performing at different levels (“levelling up”) and receiving badges—perhaps representing confidence in the statistical validity of our results, and the like—would be typical methods. Just as (ideally) workers create real innovations and businesses provide real-world rewards for progress in the games, so scholars might progress along a similar continuum of activity. But I am sceptical of these strategies (at least, in this relatively undeveloped account) because they inevitably privilege the reward over the process and diminish interest in the intervening steps. I also suspect that many digital humanists would now go further and suggest that those steps are essentially performative and need not be seen in teleological terms (that is, as a means to some higher scholarly end). Incorporating “play” in scholarship eventually blurs the boundaries between analysis, interpretation, and creativity. That is appealing to some, deeply disturbing to others. As of now, I find myself on the fence, wishing to think more deeply about how to negotiate the status of objects I produce through “scholarly play”.

This post was originally written over the summer when I had been working on a major update of the Lexomics textual analysis tool Lexos and had freshly read my friend Kevin Werbach’s For the Win (there’s my full disclosure with respect to the emphasis on gamification). I had also just finished work on the playful Serendip-o-matic for the One Week | One Tool project. However, the receipt of a major grant to produce a digital edition of a medieval manuscript turned my attention to an entirely different type of work: text markup using TEI. The work I have done on that project has delayed this post considerably, possibly at the expense of coherent thought (I hope not). The intellectual issues raised by trying to represent a manuscript in the form of a digital object are not entirely unlike those of computational text analysis, but I haven’t even begun to address them in this post. Rather than delay further and make this post even longer (and possibly even less coherent), I will simply get it into the blogosphere and hope to develop my ideas further in future posts.