Jim told me about Trebor Scholz's Survey of new media programs. This is a useful resource. There seems to be a focus on media studies and art-technology (but not only). Some other lists I know of: Willard McCarty's and Matthew Kirschenbaum's “Institutional models for humanities computing” (not updated recently though) and “Courses in Cyberculture” by David Silver/RCCS (individual courses, not programs).
Matt Ratto from The Virtual Knowledge Studio for the Humanities and Social Sciences, The Royal Netherlands Academy of Arts and Sciences, is speaking in the lab today at 13.15 CET.
Already False, Potentially True: epistemic commitments, virtual reality, and archaeological representation
I am very pleased to announce next week’s seminar with Matt Ratto, The Virtual Knowledge Studio for the Humanities and Social Sciences, The Royal Netherlands Academy of Arts and Sciences.
I met Matt at the Cyberinfrastructure Summer Institute at UC San Diego in July, and I was very impressed with his work and his intellectual investment in the humanities and information technology.
On Tuesday this coming week – Dec 12, 1.15 pm CET – he will talk about
Already False, Potentially True: epistemic commitments, virtual reality, and archaeological representation
The (long) abstract follows below. Everyone welcome! The talk will be streamed live and archived. We will also provide a chat room for live interaction.
In this paper I try to answer the question of why few classical archaeologists are embracing 3D computer modeling and simulation when adjacent disciplines and interests (such as scientific archaeology, museums and cultural heritage organizations) have been so engaged and interested in adopting such technologies. In doing so I return to the broader question of the differences between arts, humanities, and social and natural science disciplines, and what Knorr-Cetina calls “epistemic cultures.” To add detail to this notion, I focus specifically on what I term the epistemic commitments performed by individual scientists and scholars. I use a particular notion of commitment, borrowed from Howard Becker, to address the ways epistemics, or ways of creating, representing, and defending knowledge, can be seen as part of the means by which alignments are made between academic disciplines, the fields of enquiry that they represent, and shared notions about what constitutes valid research. These commitments rely on specific configurations of analytic, representational, and communicative tools, and include both functional and aesthetic choices. I claim that epistemic cultures are constructed and maintained through the epistemic commitments of participating scientists, including their choices of both material and conceptual tools. Further, a focus on the epistemic commitments of scholars and scientists provides the means for linking individual action and larger knowledge traditions, and provides a way to denaturalize particular traditions as authoritative.
To illustrate the relationship between technologies and epistemics, I start with a story about a classical archaeologist at the University of Amsterdam. A specialist in terracotta materials from the pre-Roman period, Dr. Lulof wanted to change how others in her field thought about fascia materials on a particular form of pre-Roman temple. Traditionally these materials, which depicted classical mythological scenes, were thought to represent an early form of political propaganda. Non-elites in the society walking by the temple would identify them with elite temple goers and thereby reinforce the power of both the temple and the specific elite in question. However, Dr. Lulof's opinion was that the scenes on these fascia tiles were not visible from beyond the sacred space that surrounded pre-Roman temples of this vintage. Therefore, they could not be seen by individuals who were not elites (who were not allowed to enter the sacred space), and thus they could not act as propaganda. However, she felt she had no way of testing whether or not the fascia were visible at the requisite distance except by building a one-to-one model of the temple. Hearing of her problem, researchers at SARA, a computer center in Amsterdam that provided computational and visualization resources to academics in the Netherlands, agreed to help her. Using her drawings, the programmers at SARA proceeded to build a virtual version of the temple, complete with fascia, for display within SARA's CAVE, an immersive 3D environment. Upon completion, Dr. Lulof (with some of her peers) entered the CAVE, walked to the correct distance, and noted that the fascia images were not recognisable. Before fully writing up her project and findings, Dr. Lulof solicited comments from three distinct groups of archaeology-related scholars and scientists: her peers, including fellow terracotta specialists as well as other classical archaeologists; more technically oriented archaeologists; and computer programmers and scientists who specialize in cultural heritage representation. Perhaps surprisingly, all three of these groups rejected her project, albeit for very different reasons.
This paper takes the multiple rejections as indicative of the different epistemic commitments of the various groups involved. While of varying types, these commitments help to trace out the various research objects, modes of representation (aesthetics), and forms of evidence particular research communities see as valid and necessary to good research. Acknowledging these commitments can help us develop appropriate technologies that help rather than hinder existing research practice, add a layer of reflexivity to researchers’ choices and decisions, and ultimately, facilitate productive cross-disciplinary collaboration.
Yesterday I visited McMaster University in Ontario, Canada. I have known about the work of Geoffrey Rockwell, Andrew Mactavish and Stéfan Sinclair for quite some time, but I really feel that this short visit gave me a much better sense of what they do. Some of the McMaster components are the Multimedia Programme, Humanities Media and Computing, the Communication Studies and Multimedia Department, and major projects such as TAPoR. Geoffrey and Andrew told me about the strategies involved and the history, and I really appreciate the way this platform has been built, the level of integration between the different parts, and the level of student involvement in these processes.
I gave a talk late in the afternoon, and I quite enjoyed the discussion. The first part – on visualization in the humanities and digital humanities more generally – I had not really had time to frame properly, but the mixed ideas I discussed in the talk will be very useful for the article I am working on. It also seemed that the vision of HUMlab came across (I talked about HUMlab at the end), and I am glad about the interest in space and the interrelation between space and ideas. On this trip, several people have asked about written reflections on these matters, and I really should try to write something up. I think the experiences and histories of existing studio/lab spaces (not only our lab, of course) can be very useful for people planning new enterprises. The choices you make (if you are in a position to choose yourself) in terms of designing a space are really crucial to what will come out of the space and the associated ideas. What the article would be about, I think, is also conceptual cyberinfrastructure (a term Matt Ratto used in a question after a talk I gave at UC San Diego this summer).
I had several good conversations at McMaster and we will be looking into further collaboration. I also got to see some of the work being done in the TAPoR project. I knew about the project beforehand, but having Geoffrey show it to me was just great. I am very impressed – not least with the more experimental tools and the whole infrastructure. I will write more about this in another post.
Male Restroom Etiquette by Overman at Zarathustra Studios won the 2006 Machinima Award for Best Writing, presented by the Museum of the Moving Image in New York on November 5 and 6. It is a clever parody in the style of 1950s sex education films, discussing how men should behave in public toilets if “the fabric of civilization is to be maintained”. It was made using the Sims 2 game engine and – oh, I’m sorry, maybe you don’t know what machinima is. In that case you should come along to a short course next Thursday, 13:15-16:00 in HUMlab, where machinima will be discussed, viewed, theorised and even created (a very short segment anyway). The course is titled Mods och Machinima and Interactive Fiction, and we will be looking at how tweaking and changing digital objects such as computer games can result in fiction. While interactive fiction is a huge area, in this course we will be pinning it down with the concept of modding: “the act of modifying a piece of hardware or software to perform a function not intended by someone with legal rights concerning that modification” (Wikipedia). Perhaps there is no more “interactive” a fiction than that which is a product of modding, as the fiction is created through complete interaction with the already existing source materials: code, images, sounds, visuals and even lighting and point of view. Jim Barrett and Stefan Blomberg will be leading the course in a delightful mixture of Swedish and English. If you would like to sign up for this course, write to: email@example.com More information on HUMlab short courses can be found HERE (in Swedish).
The other day, I attended a course in the lab on concordances and how they can be used for linguistic research and in language education. Jon, who was giving the course, pointed out that the most commonly used concordance tool today is probably Google, and we had an interesting discussion about the consequences of this.
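For anyone who has not seen a concordance tool in action, the basic output is a keyword-in-context (KWIC) listing: every occurrence of a search word lined up with a bit of surrounding text. A minimal sketch in Python follows; the sample sentence and the window width are my own invented illustrations, not anything from the course.

```python
import re

def kwic(text, keyword, width=30):
    """Return keyword-in-context lines: each match aligned with
    `width` characters of left and right context."""
    lines = []
    for m in re.finditer(re.escape(keyword), text, re.IGNORECASE):
        left = text[max(0, m.start() - width):m.start()].rjust(width)
        right = text[m.end():m.end() + width].ljust(width)
        lines.append(f"{left} [{m.group()}] {right}")
    return lines

sample = ("The lab hosted a course on concordances. A concordance lists "
          "every occurrence of a word in context, so a concordance is a "
          "basic tool for corpus linguistics.")
for line in kwic(sample, "concordance"):
    print(line)
```

Dedicated tools add sorting, part-of-speech filtering and frequency statistics on top of this, but the aligned-context view is the heart of the idea.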
I am sure many of you use Google to check different constructions and compare frequencies; at least this behavior seems to be common among those of us who do not have English as our native language. But how reliable are these results? In some cases the frequency numbers differ so greatly that there is no question that one construction is incorrect. However, sometimes the differences are not as great, and a more thorough analysis of the results is needed. For example, one has to evaluate some of the sources of the hits (for instance, if you only get a small sample and the majority are from .se addresses, it is likely that you are dealing with a common Swenglish expression).
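The kind of frequency check described above can be sketched in a few lines of Python. The sample sentences and the competing constructions are invented for illustration, and a real check would of course run against the web or a large corpus rather than three sentences, but the principle is the same: count occurrences of each variant and compare.

```python
from collections import Counter

def construction_counts(texts, constructions):
    """Count how often each candidate construction occurs in the texts
    (case-insensitive substring matching, for simplicity)."""
    counts = Counter({c: 0 for c in constructions})
    for t in texts:
        low = t.lower()
        for c in constructions:
            counts[c] += low.count(c)
    return counts

# Invented mini-corpus standing in for a set of search hits.
corpus = [
    "The results are different from last year.",
    "Our setup is different from theirs.",
    "This outcome is different than expected.",
]
counts = construction_counts(corpus, ["different from", "different than"])
print(counts)
```

As the post notes, the raw counts are only the first step: before trusting the majority variant you still have to look at where the hits come from.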
One advantage I see with using Google instead of traditional corpora for concordance is that Google captures language as it is used today. We may well find examples of constructions previously considered incorrect which now appear to be common, and this might lead us to accept that language is constantly changing. Maybe to sometimes split infinitives is not that terrible a crime after all?