Thoughts from the Reading Pile: Susan Jacoby’s The Age of American Unreason

Over the weekend, I started reading Susan Jacoby’s 2008 book The Age of American Unreason. It identifies and traces the resistance to science-based, rational decision-making in the American public from the perspective of a historian. Thus far it’s an engaging read, though reading it from a 2014 perspective has turned up a few hiccups.

I’m still early in the book, but one thing that has jumped out at me is Jacoby’s lament that reading is no longer valued as it once was:

What kind of reading has exploded on the Internet? Certainly not the reading of serious books, whether fiction or nonfiction. The failure of e-books to appeal to more than a niche market is one of the worst kept secrets in publishing, in spite of the reluctance of publishers to issue specific sales figures. Even a popular mass-market novelist like Stephen King has flopped on the Web. In 2001, King attempted to serialize one of his supernatural thrillers online, with the proviso that readers pay $1 for the first three installments and $2 for the subsequent portions. Those who downloaded the installments were to pay on an honor system, and King pledged to continue serialization as long as 75 percent of readers paid for the downloads. By the fourth installment, the proportion of paid-up readers dropped to 46 percent, and King cancelled the series at the end of the year. King’s idea of serialization had of course been tried before, and it was a huge success–in the nineteenth century. London readers used to get up early and wait in lines for the newest installment of a novel by Charles Dickens; in New York, Dickens fans would meet the boats known to be carrying copies of the tantalizing chapters. The Web, however, is all about the quickest possible gratification; it may well be that people most disposed to read online are least disposed to wait any length of time for a new chapter of a work by their favorite writer.

Susan Jacoby, The Age of American Unreason (2008): 16-17.

Leaving aside the issue of what counts as a ‘serious book’ (it is at least as old as the first lurid broadsheet off the printing press), this whole line of argument dates the book in fascinating ways. It’s only been six years since this was published, and yet the technological changes in between have opened an enormous gulf.

Our relationship to reading and the internet has shifted in the last few years, in no small part due to improvements in microprocessor technology. Those improvements have made smartphones widely affordable, and paved the way for the 2010 release of Apple’s iPad, the first practical, consumer-level tablet computer. Furthermore, in late 2007, when this book would likely have just wrapped up edits and been in preparation for final publication, Amazon released the first-generation Kindle ebook reader.

These three technologies (dedicated ebook readers, smartphones and tablets) have had an enormous impact on ebook sales. While the debate on the comparative merits of print versus ebooks is still going strong, the Pew Research Center’s 2014 report on ereading shows ebooks are growing in popularity in the United States, and have become a significant part of the market. The same report shows that about 76% of Americans have read at least one book in any format in the last year, and that ebook reading in particular increased from 17% to 28% between 2011 and 2014. I contend that, rather than destroying reading by encouraging instant gratification, the internet has become a means of satisfying that desire where reading is concerned.

I can’t hold Jacoby’s assessment of the internet and ebooks against her, though I’m curious whether she’s revised her view since this was published. I can, however, take issue with her comparison of Charles Dickens and Stephen King. 

Dickens and King come from two entirely different publishing environments. It’s easy to forget, I think, just how revolutionary and important mass printing technology was to culture prior to the advent of radio and moving pictures, never mind television and the internet. At the time Dickens was publishing, magazines and penny dreadfuls (http://en.wikipedia.org/wiki/Penny_dreadful) (cheaply produced, sensational fiction publications) were an enormously popular form of mass entertainment. Consider, for example, the public uproar when Sir Arthur Conan Doyle dared to kill off Sherlock Holmes. Serial fiction inspired the kind of passion we now direct at television, movies and video game franchises (see Firefly, Veronica Mars, Lord of the Rings, Star Wars, Halo, Mass Effect and so forth).

Stephen King is popular, no doubt about it, but his 2001 experiment in web serializing, The Plant, is at best superficially similar to Dickens’s serials. At the time, ebooks still appealed primarily to a niche market; they were not an established force in popular culture. The Green Mile, which was originally published as a set of six short mass-market paperbacks spaced a month apart, is the better analogue, though it lacked simultaneous ebook editions. In contrast with The Plant, which King abandoned after only six parts due to disappointing sales, The Green Mile was a best seller. It also won the 1996 Bram Stoker Award for Best Novel, received nominations for both the British Fantasy and the Locus Awards in 1997, and was adapted into a successful and critically well-received film a few years later: clear metrics of success.

Rather, I think that King’s web serial was the right idea at the wrong time. The notion has recently been revived to better success, though it remains to be seen what will come of it. In 2012, science fiction and fantasy publisher Tor announced that they would be publishing John Scalzi’s next novel, The Human Division, as an ebook serial on a subscription model. I couldn’t find sales figures in the short time I have to devote to a blog post, but it was a successful experiment in Scalzi’s assessment, and Tor announced a second ‘season’ to continue the story shortly after the final episode was released. 

I’m interested to see where the ebook market goes in the next several years; the shifts in technology typified here formed a cornerstone of my Masters thesis. I am particularly interested to find out what things we ‘know’ now will be proven laughably wrong in hindsight. It’s entirely possible there is something, or several somethings, that I’ve written future readers will find as peculiar as Jacoby’s views on reading in 21st century America.

TAPoR, Historic Tools and Going Out on a High Note

Last week, I had the singular pleasure of seeing a prominent DH scholar comment favourably on an aspect of the TAPoR 2.5 portal I’m particularly proud of. 

Over the last few years, I’ve been getting deep into digital tools used by humanities scholars, both for the purpose of populating the TAPoR site with resources, and as part of the project’s Just What Do They Do? mandate to discover how scholars use and relate to them. About a year and a half ago, the site expanded from contemporary, well-known tools to include a whole range of historic tools stretching back to the 1950s.

The historic tools section of the site quickly became my biggest contribution to the project. It all comes back to a Digital Humanities journal called Computers and the Humanities (often called CHum), which was in publication from 1966 through 2004. Dr. Rockwell, the project lead, pointed me toward it in late 2012; at the time, I was largely occupied with building up a corpus for content analysis on computer-based tool use and development, and this journal is a particularly rich source of information.

As these things happen, before long I was not only collecting articles, but adding tool after tool to the TAPoR portal. At this time, the site’s collection was still composed largely of the TAPoRware and Voyant toolsets, plus a growing number of well-known contemporary tools for text analysis, visualization, concordance and so on. With CHum as a resource, we all of a sudden had enough information to include older tools as well. 

Soon, I had added dozens. Not just the famous ones, like TACT, the Oxford Concordance Program, and COCOA, but lesser-known tools like URICA! II. Many were developed in the 1960s or 1970s and passed out of use alongside the mainframes and punched card systems that ran them.

By the summer, I realized I had a chance to do more, one that went beyond just cataloguing tools. When I first started on TAPoR, I was writing tool reviews for the nascent collection, based on direct testing. It was pretty dry, formulaic stuff to write, though I have no doubt it’s helped some visitors decide whether the tools in question were worth pursuing for their own work. For the historic tools, I could do something a bit more interesting. In place of direct testing, I had 40 years of scholars’ commentaries, reviews and development-based papers. As a literary scholar and historian by training, I found the prospect of digging into this trove of information exciting and valuable.

Fortunately, Dr. Rockwell agreed. He gave me the go-ahead to start ‘reviewing’ the older tools. 

I’ve completed and posted 34 to date (the full list is available here). In these historic tool overviews, I’ve presented as complete a picture as I can of each tool’s functions, reception and applications, each based on as much direct information as I could get from their developers, reviewers and users. Sometimes it wasn’t much, depending on how well-documented the tool was, but my work has brought a wide range of tools back into focus for other scholars to explore.

Even if it had stopped there, I’d still be happy with what I’ve done. I managed to take it a step further, though. Over the winter, I proposed a network analysis of the entire CHum corpus, showing tool interconnections, influences and genealogies, going back to when all scholars had to work with was custom algorithms in programming languages like PL/1 and FORTRAN. It ended up being a major part of TAPoR’s research work up to the spring of 2014, made possible with Dr. Rockwell’s support and a great deal of scraping, statistical analysis and visualization work from my fellow Research Assistant Ryan Chartier. It’s already been presented at this year’s CSDH/SCHN conference, and it will eventually be a paper; exciting stuff for a Masters-level RA.

I’m not sure how much longer I’ll be able to stay on the project, but it’s a good note to end on. I’m proud to have been able to do as much as I have for it.

I doubt Alan Liu will ever see this, but just in case: Thank you for your kind words. It meant a lot to me that I’ve been able to make a notable contribution, however small, to a field I’ve grown to love.

Reflections on a Digital Conference

Or, The Trials and Tribulations of Live-Tweeting

A few weeks ago, the Contemporary Ukraine Research Forum project wrapped up with a conference. 

The Forum itself is a bit of an experiment, and the conference was no different. It was conceived of and conducted as an international exchange of ideas, with academics participating from several institutions in Alberta, Canada and Ukraine. 

During the course of the project, we met monthly via video conference. These were occasionally challenging to schedule due to the number of conference rooms to coordinate, never mind the time difference between Edmonton and Kyiv, but despite a few glitches, they proved an effective and valuable component of the project. It was a great way to put names to faces, and ensure everyone involved was aware of what their colleagues were doing.

Thus, it was a natural extension of these video conferences to present research papers at the concluding conference the same way. The project’s coordinating committee collected video from each presenter, and arranged to have the whole proceeding broadcast over LiveStream, interspersed with commentary and introductions from participants at each institution’s video conference centres.

I was involved in this process in a technical capacity. I updated the website with announcements from the coordinating committee, set up a page indexing the presenters’ abstracts, made blog posts with announcements and further information, and created a presenter gallery cross-referencing portraits to each person’s talk. For the LiveStream, I set up a page with the stream embedded in it alongside social media widgets for discussion. 

On the day of the conference, I was stationed in the University of Alberta’s conference room to monitor the website for comments, make any necessary last-minute updates, and live-tweet the event while the Project Coordinator did the same on Facebook.

From my perspective, the conference proceeded smoothly. I was able to devote the vast majority of my time to updating the project’s Twitter account (@EuromaidanForum) with information on and salient points from each speaker. 

It was intense.

Twitter is a great medium for providing capsules of information as events happen. It forces you to distill information down and home in on the most important parts. This requires the ability to swiftly discern and capture points, and the judgement to recognize when to give up and move on to the next point.

It was challenging to maintain active listening and simultaneously write out speakers’ points in a form suited for Twitter. I swiftly began pasting the speaker’s name, followed by a colon, at the beginning of each tweet, so I could save time typing and move on to capturing the topic at hand. It helped immensely, but came with its own problems – in one instance, I didn’t realize until much later that I’d copied a misspelling of one person’s name. The error, which normally would have jumped out at me, was lost in the flurry to write out his points.

Another challenge was that I couldn’t always pick up on a person’s points, or details about them, quickly enough to present them on the Twitter stream. Faced with a barrage of information, I rarely had the opportunity to ask others for clarification, and often simply had to move on or risk losing the thread entirely. As such, some speakers were better represented than others, something I regret particularly as a native English speaker attempting to represent terms and concepts from native Ukrainian speakers presenting in English. Pausing to check my spelling on an unfamiliar Ukrainian term tripped me up more than once, which I feel was a disservice to those speakers and their work.

One thing that particularly helped on this front was input from academic observers. In a few key instances, I was able to re-tweet their observations, which provided a welcome alternate view and enabled me to offer valuable commentary to followers where my own coverage had stumbled. This also required some snap judgements regarding what to include or pass by, but for the most part it was a welcome supplement.

However, commentary from others had its drawbacks as well. In one instance, the forum account had an argument tweeted at it when observers disagreed with a speaker’s point, and the account’s notifications were peppered for a while with the exchange. It was at once fascinating, from an academic perspective, and highly distracting.

As a whole, the event was an exercise in integrating social media. Despite the challenges, I received positive feedback from the coordinating committee and others. My impression is that it was overall a valuable addition to the conference, and I’m pleased to have ended my time on the project on such a high note.

For those interested, a recording of the conference is available to watch over LiveStream. I have also collected my live-tweets via Storify.

On Creating in WordPress

A Few Thoughts on WordPress versus Hand-Coding

In the last several months, I’ve had the opportunity to go in-depth with WordPress as a web development platform. 

This represented a departure for me. Ever since I first learned HTML, back in the days of Geocities and table-based layouts, I’ve been most at ease hand-coding my pages. While I’ve used web development software before, I appreciate the control and depth of understanding hand-coding permits.

Which is not to say that it is without drawbacks. Designing and modifying a website in a text editor is time-consuming, even with a modular design and PHP to stitch it all together. It’s easy to introduce errors; all it takes is one misspelled semantic element, or one misremembered piece of CSS syntax. I’ve spent many hours tweaking box element positions in a stylesheet even when all is going well.

Yet, it’s easy, too easy, for me to get caught up with layouts and structure at the expense of the site as a whole. Spelling and grammar are harder to proof, for one. More critically, it’s harder to step back to see the overall effectiveness of a site when I’ve spent hours working with lines of code for just one part of one page. To be fair, some of that may be an artifact of being just one person, never mind one who is still learning in many respects.

By contrast, working with WordPress emphasizes content. Instead of creating a layout from scratch, users choose a preset. Instead of writing a new file for each page, users choose from a short list of templates, then proceed immediately to formatting their text and adding media. 

It is simultaneously restrictive and strangely freeing.

When I started working with the Contemporary Ukraine Research Forum project, they had no website to speak of, just a space on the University of Alberta’s ARC servers and a WordPress install. All it took for me to get the site roughed in and ready for project content was a Skype call with the coordinating committee and a few afternoons of creating pages, setting up menus and roughing in the sidebars with example content. By contrast, it took me weeks of dedicated work to get the case study website I designed for my thesis structured and styled.

Despite the time it took to get the formatting just so in WordPress’ page editor (amazing disappearing non-breaking spaces! Header tags mysteriously applied to whole paragraphs!), it was almost absurdly fast to get each new page populated and ready to go. No messing with positioning, no time spent searching out where I missed a close tag in an unordered list, no playing around with formatting individual CSS classes, just a polished page. 

I’m of two minds on this. It’s great to have it done in relatively short order, ready for the world to see. However, I feel like I’ve invested very little of myself in the pages, beyond the images and text. With WordPress, and especially with the free version I’m currently using, there is limited capacity to alter layouts, change fonts, experiment with colour schemes, or refine how the pages behave on various devices and browsers.

It’s great to have a portfolio up and running so quickly, but at the same time, I built my first one from scratch for good reason. It was not only a chance to demonstrate that I can build a competent, if basic, website from scratch, but also a place to refine and practice my HTML and CSS. That said, I also learned a great deal from running the Contemporary Ukraine Research Forum site as an admin; it gave me a broader view of good site design by distancing me from the level of code.

I’m still deciding how I want to go forward from here. I like WordPress, despite its limitations, even though I miss getting deep into the code level. Yet, I still value my old portfolio, and I regret the loss of both it and my thesis case study website with the end of my university hosting.

Perhaps it’s simply time to invest in a domain and hosting. Transfer over my older work, and give myself the ability to exercise greater control over my WordPress work. Worst case scenario? I develop a passion for WordPress theme development.

Pardon My Dust

Today marks the official re-opening of this blog. 

When I began this WordPress site, it was bare-bones, no more than a place to hold textual thoughts as I explored programming concepts from a newly post-Masters perspective. I didn’t bother with anything but the most basic setup on the logic that I would be best off focusing exclusively on the words while I built up my writerly momentum. 

Even with this low level of commitment, life got in the way.

It happens. Blogs are started and abandoned with regularity; it takes a particular kind of person to keep one going for the long haul.

So, why revive it now?

It’s a straightforward tale. This site fulfils a more urgent need than it once did, and I now have more time to devote to it. Mercenary reasons are as good as any.

In short, I recently finished up work on a major web development contract. It was time to update my web portfolio to reflect it, and I soon discovered that my post-Masters self no longer has login rights to make the changes.

Moving on from my University-hosted homepage is, at the end of the day, a good thing. It’s long past time I investigated other options. I’m proud of the work I did there, but I’m glad I’ve had the kick I needed to expand beyond it.

I’ve spent the last few weeks revamping this site to act as a new portfolio. I’ve lived and breathed WordPress the last six months, and it made good sense to come back here. In the process, I realized that once again, I had things to say. Where I’ve been, where I’m going, what I’ve learned of late. 

And so, here I am. 

At this precise moment, I write to silence. I have a lot of work yet to do, a publication schedule to establish. A shiny new page structure and a banner image are merely a good start.

We’ll see where it goes from here.

Hackathon, Day 2

As I write this, it is the morning of the second day of the Hackathon. I’ve been able to pursue my idea, but, as these things tend to go, it hasn’t played out at all how I’d planned. I spent last night turning over the first day in my mind, and now I find myself reassessing.

Here were the goals I had set for myself before the Hackathon:

1) Nail down the character, items, NPCs, areas and basic mechanics such as accessing inventory, exploring a room, combat, conversation, levelling up and so forth. Mechanics will be governed by functions and may be simplified further if required.

2) Build a simple, linear game using each major element and mechanic at least once.

3) Build a web interface over the game.

4) Playtest.

5) Expand game, if enough time remains

As of this morning, I’m still on point one, with no realistic way to move past it in the time remaining. The first room is in place, as is the player-character, some plot items and a sample creature. I’ve also roughed out the combat function and the examine item function, and identified where I need to write other functions to ensure those two behave as I need them to.
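To give a sense of what I mean by a roughed-out combat function, here is a toy sketch of the shape it might take. This is loosely D&D-flavoured, and all names and numbers here are my own invention for illustration, not the actual Hackathon code:

```javascript
// Toy turn-based attack: roll a d20, add the attacker's bonus,
// and compare against the defender's armor class.
function rollD20() {
    return Math.floor(Math.random() * 20) + 1;
}

function attack(attacker, defender) {
    var roll = rollD20() + attacker.attackBonus;
    if (roll >= defender.armorClass) {
        defender.hitPoints -= attacker.damage;
        return attacker.name + " hits for " + attacker.damage + " damage!";
    }
    return attacker.name + " misses.";
}

// Hypothetical player-character and creature objects.
var hero = { name: "Hero", attackBonus: 5, damage: 3, hitPoints: 20, armorClass: 14 };
var rat  = { name: "Giant Rat", attackBonus: 1, damage: 1, hitPoints: 4, armorClass: 12 };
```

Even a toy like this makes clear how many supporting pieces (death checks, turn order, levelling) a real combat system needs, which is exactly where the time goes.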

To be honest, I’m not terribly surprised. The game mechanics I was hoping to implement proved more complex to build than is realistic for the timeframe, even with additional simplifications. I’m thinking I’ll have to go back to the drawing board at the end of this and decide what, specifically, I hope to accomplish, and how I might more realistically carry that out.

I’m not particularly upset about all this. I ended up working with only one other person, Jacqui. Her strength is in project planning and documentation, and she ended up putting together an impressively thorough document mapping out all the object relationships a game like this needs. It’s given me some serious food for thought. While it would have been nice to have an experienced coder working with me, planning of this kind is so important, and I’m no doubt better off for it. I’m still happy with how much the two of us accomplished, given our respective skill sets.

Today, I’m going to keep plugging away at functions. If I’m really productive, I might be able to start testing one or two. Jacqui, alas, is down with a back injury, so I’m flying solo.

While it’s clear that the task is much bigger than I expected, I’d like to keep going for a bit longer before I decide whether it’s worth continuing this development path. I’ll have to decide once the event wraps up. I want to continue with the idea, and I still think it’s going to be a great way to learn more about how JavaScript works.

Hackathon Preparation

With the Hackathon less than a day away, I’ve been gathering my thoughts for my project. I’ve set the following goals for the weekend:

1) Nail down the character, items, NPCs, areas and basic mechanics such as accessing inventory, exploring a room, combat, conversation, levelling up and so forth. Mechanics will be governed by functions and may be simplified further if required.
2) Build a simple, linear game using each major element and mechanic at least once.
3) Build a web interface over the game.
4) Playtest.
5) Expand game, if enough time remains.

I think this is realistic, but I’ll have to see how well it works out. I have no way of knowing who I’ll be working with, or if my project is even going to attract the interest of potential team members.

For posterity’s sake, here’s the rough outline of how I want the game to work (fair warning, this is a bit of an infodump):

The Learning Process in Fits and Starts

As the Hackathon approaches, I’ve been reminded of a few things important to my learning process:

  1. Having something concrete to work on is important. Picking up a skill without a clear application or need is a lousy way to develop a true understanding of it, at least for me.
  2. It’s okay to walk away for a while, as long as I prioritize coming back. I lost momentum for a week due to Thanksgiving and other demands on my time, and picked my JavaScript lessons back up over this last weekend.
  3. Sometimes it’s best to come back to a problem. This week, I’ve had a string of days where I’ve been stuck on one concept, and instead of beating my head against it, I stopped for the day once it was clear the concept wasn’t gelling. When I came back the next day, I would usually be able to see where I was hung up right away and then move on.
  4. It helps to talk to people already using what I’m trying to learn. A programmer friend was in town last week, and at dinner, he asked me to describe how I was planning to construct my text adventure at the Hackathon. The resulting conversation pointed me toward concepts I wasn’t yet familiar with. When I got to that point in my instruction a few days ago, remembering that conversation really helped me understand the purpose, value and applications of those concepts.

It remains to be seen how the Hackathon itself will go. I suspect I understand more than I think I do, but at the same time I’m certain there are still huge gaps in my knowledge. The experience will educate me in both, and likely point me toward a few more things besides.

Meetups and Hackathons

I’m still plodding along in Codecademy; it’s been busy enough that while I’ve been able to set aside a bit of time daily for it, I’m still approaching the midpoint of the JavaScript modules and have yet to delve as deeply as I’d like in the book I picked up.

However, I found a local JavaScript community group this week, Exchange.js, and attended one of their meetings yesterday. I’ve got a way to go before I can follow much of what they’re discussing, but the group turned out to be friendly to both newbies and questions. It was soon clear that part of the point of the group is to discuss concepts and implementations that not everyone can expect to be familiar with, so they can find out more about them. I was able to have a number of good conversations once the presentations wrapped, and will be attending again next month.

The meeting drove home just how few and far between women are in programming language-based communities, though – I was one of only three attending, and one of the others was a business owner trying to get a conceptual handle on what her employees are developing. It’s stranger than I expected. I’m curious to see how the web developer community compares, once I’ve seen more corners of it; my initial impression is that women are still outnumbered, but not nearly as badly. Regardless, I’ve yet to feel out of place or unwelcome.

One immediate side-effect of the meeting is that I’ve been persuaded to sign up for an upcoming hackathon, and will be prepping a pitch for it. I’m sticking with a game concept, as I still think it’s a great, scalable learning exercise. The concept is still rooted in the text adventure, but I now want to integrate a simplified version of the D&D 4E ruleset, pare the party down to a single-character adventure, and perhaps adapt a game module for the gameplay. I’ll lay out the rough design here in a future post.

JavaScript, PHP and Planning my First Solo Project

So JavaScript has been going well. I’ve been working through Codecademy’s modules at a satisfying pace.

In particular, it turns out that my background in PHP has been really helpful. The two languages are more similar than they are different:

[Image: a side-by-side comparison of basic PHP and JavaScript.]

And so on. If/else is structured the same way, right down to the curly braces, and I won’t be surprised to find out loops are too.
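To give a sense of the overlap, here is a small sketch of my own (not from the lessons): the same conditional is nearly identical in the two languages. Below is the JavaScript, with the PHP equivalent noted in a comment:

```javascript
// In PHP, the same logic would read:
//   function passOrFail($score) {
//       if ($score >= 50) { return "pass"; } else { return "fail"; }
//   }
// Same keywords, same curly braces; only the variable syntax differs.
function passOrFail(score) {
    if (score >= 50) {
        return "pass";
    } else {
        return "fail";
    }
}

console.log(passOrFail(72)); // "pass"
console.log(passOrFail(31)); // "fail"
```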

The choose-your-own-adventure module was a disappointment, though – it was plunked in so early on that it didn’t convey any of what I’d hoped. Instead of the rough outline of a complex set of possible inputs and outcomes, it set up a single yes/no scenario and ended. There’s a second one forthcoming that may be a bit more what I have in mind, but I’m not holding my breath at this point.

That said, I’m confident I’ll be ready to try out the more complicated version on my own in another day or so. I’m aiming for an experience similar to Zork, or, if I can get myself to the point of including static graphics, the early King’s Quest games. Here’s what I’m picturing:

1) An interface in which users are encouraged to type key words and phrases to proceed, based on the context provided.
2) Multiple outcomes, anticipating two to five possible user inputs, plus a set of fail messages (“I don’t understand ‘popsicle’”), which may include some easter-egg-type responses for certain inputs.
3) Testing user responses will require string matching, with the strings run through an uppercase or lowercase conversion so user input can be case-insensitive.
4) Outcomes are loaded into a variable when the user has made a valid choice, with another variable designating the encounter. Layered if/else or switch/case statements then pass the user to the appropriate next encounter.
5) I’d need at least two functions: one to allow users to quit or restart based on a preset keyword, and one to bring up the play instructions.
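A rough sketch of how those pieces might fit together follows. All of the names and messages here are hypothetical placeholders, a guess at the structure rather than working game code:

```javascript
// Track which encounter the player is in; switch/case routes input.
var currentEncounter = "cave-entrance";

function handleInput(rawInput) {
    var input = rawInput.toLowerCase();  // case-insensitive matching

    // Preset keywords handled before anything else.
    if (input === "quit" || input === "restart") {
        currentEncounter = "cave-entrance";
        return "Game reset.";
    }
    if (input === "help") {
        return "Type a key word or phrase to act.";
    }

    // Match key words within the current encounter's context.
    switch (currentEncounter) {
        case "cave-entrance":
            if (input.indexOf("enter") !== -1) {
                currentEncounter = "dark-tunnel";
                return "You step into the darkness.";
            }
            break;
    }

    // Fail message for unrecognized input.
    return "I don't understand '" + rawInput + "'.";
}
```

Each new encounter would add a `case` with its own set of recognized phrases, which is where the two-to-five-inputs budget per outcome comes from.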

I expect this would give me the complexity I want while remaining appropriate for my current skill level. As my skill grows, I can use it as a framework to practice on. I’m aiming to include some HTML5 and CSS3, and an interface that lets users type directly into a field in the page rather than relying on a series of annoying browser popups.

I’ll make the game in progress available through my University of Alberta homepage.