The Sims as Narrative Engine


A scene from The Awakening II

I thought I would provide a little background and context for my seminar ‘The Sims as Narrative Engine’ (25th October 2005 1:15pm CET).

First, what is machinima? You need look no further than the all-powerful Wikipedia.

Why is machinima interesting?
I think it is interesting because it provides us with a modified form of a computer game that clearly tells one or more stories. This is the primary focus of my work with machinima, the point at which a narrative emerges out of the game play. There are several ways this is done and this is what I will be talking about next Tuesday.

Can we see some machinima films?
Oh YES YES YES! There are thousands of them on the net.
I recommend The Awakening films by April Hoffman (they are the films I will concentrate on in the seminar). They can be downloaded or streamed from here.
But Hoffman is by no means alone in the machinima film sphere. It is huge! I recommend these gems:

Rooster Teeth’s The Strangerhood downloadable here. It was also made using The Sims.

The Codex, which was made using Halo 2.

A Few Good G Men, made using Half-Life 2 (you may recognise its inspiration).

Finally, the World of Warcraft universe has given us many films.

The site will probably provide for any further queries you have until next Tuesday in HUMlab. See you there.

upcoming HUMlab seminar

October 25, 2005. 1:15 pm.

All HUMlab seminars are live streamed and archived. We also provide mp3 versions of the seminars (through a podcast).

The Sims as Narrative Engine
Jim Barrett, Ph.D. Candidate, Department of Modern Languages/HUMlab

How do stories get told with the world’s most popular PC game, The Sims? One way is through machinima animation, where visual content from game play is arranged into animated films, many of which are of very high quality, featuring advanced editing techniques, voice-overs, sound effects, and soundtrack music. What is interesting is that when a machinima film is being made in a Sims world the “actors” cannot be told what to do; rather, scenes are set up and the film maker(s) wait for what they want to happen to happen. Or maybe it does not happen, and the story line changes as a result of the algorithms driving the software. This is a form of film making somewhat removed from traditional practice, and one which raises many issues regarding the role of narrative in computer game play.

This presentation addresses a series of questions: What is machinima film? How does a story get ‘constructed’ in a machinima film? What are the processes of production between game and film? What are the roles taken up by the player/film maker? Is machinima marginal game play, or has it entered into the texture of the game itself? How are the various levels of game presentation (game rules, code, architecture, community) implicated in machinima narrative production? How does the future look for machinima film as a genre?

The machinima films The Awakening I and II by April Hoffman will be the main examples of machinima looked at in this presentation. Discussion and questions will be welcome throughout the presentation.

My Garden of Forking Paths.

Yesterday I had my first seminar presentation as a PhD student working
between the Department of Modern Languages and HUMlab. It went well and I
enjoyed the discussion and tips from my supervisor and colleagues. Having
done little but read course texts for the last 12 months (mostly literature
from Thomas More to J. M. Coetzee, but also substantial theory) and struggle
with my writing style, it feels good to have made my first moves into the
research part of my project.

Moving from undergraduate to graduate education is a tremendous change. To
go from consuming and processing information to having a critical eye and an
opinion has been a difficult but exciting journey. I am still on my way
(maybe always will be) but gradually I believe I am navigating towards a
more self-determined academic voice in my writing. I am beginning to see
similarities and continuities in what once seemed like random patterns with
language or cultural artefacts. Where I once read everything in a text and slavishly copied long sections, I now read once quickly and re-read a second or third time only if the text is of value, and my notes are becoming my own strange system of calligraphy. Unlike undergraduate work, where projects are undertaken following or preceding teaching or group work,
graduate work is a long series of discussions, arguments and seminars while
what could be called ‘individual work’ is going on (if there is such a
thing). I have never felt alone as a PhD student, at times far from it, with
an accessible community existing around me locally and even internationally.

My seminar yesterday was the latest event in this process. A two hour
discussion around things I am very interested in with helpful suggestions
and critical observations. I have formulated a corpus of digital texts that
could be described as hypertexts or cybertexts and collected a
representative sample of quotes from digital theorists and practitioners I
admire or subscribe to. Following this were about 25 questions I had hit
upon as I have read about and played with the digital artefacts from the
corpus. Of course this is all way too broad and I am in need of focus in
just about every area of the project BUT the focus is coming. For three days
after I met Katherine Hayles in HUMlab (21-22 September 2005) I was thinking
about how I really had no idea what my goal was in this “narrative and
reader in digital texts” thing I had managed to get a Wallenberg scholarship
for. What the hell was I going to do? Between Hayles in HUMlab and my seminar yesterday (apart from time spent with my newborn son… well, even then I was thinking a little about digital textuality, mainly during diaper changes) I tried to ‘clean up’ my mind regarding my intentions and aims for my PhD. The seminar yesterday seems to have established my tangent for
the next few months and it seems like I have a goal in (just slightly out of
focus) sight.

museums-new media and participatory design-edutainment games

We usually mostly report about English-language seminars here, but I thought I would do an entry about the two most recent seminars in HUMlab – both in Swedish. Two weeks ago Henrik Summanen from The Museum of National Antiquities (Historiska museet) visited us. He talked about new media and museums – based on solid experience in the field (and knowledge of museums around the world), the projects he has been involved in, and his ideas about the museum, the role of technology and many other things. Henrik is also a well-known name in live roleplaying here in Sweden and he touched on how that feeds into his museum work. An excellent presentation, and I am glad so many students from the museology program were there.


The streamed version of Henrik Summanen’s seminar can be found here and the mp3-version is available from here.

Yesterday Karin Danielsson from the Informatics Department here at Umeå University talked about her research, focusing on the participatory design process in relation to edutainment games. Among other things, she told us how she has been involved in the development of two games/case studies: His and hers and Rixdax. There was a good mix of people present (design institute, media studies, teacher education faculty, center for educational technology, university library, modern languages, engineering programs and others) and Karin did a great job. The discussion afterwards focused not least on methodological aspects, and it is apparent that many researchers and students (from different parts of the university) are facing similar challenges.


The streamed version of Karin Danielsson’s seminar is available from here.

Two really interesting seminars based on reflection, analysis and rich case studies/projects – highly recommended for those of you who understand Swedish.

seminar with Jim Barrett

Tomorrow HUMlab and Department of Modern Languages Ph.D. student Jim Barrett will present his thesis work (so far) at a seminar at the Modern English department.

11th October 2005
B207b 15:00-17:00
(a departmental seminar that will not be streamed, but I think everyone is welcome)

It seems as if Jim will discuss the following electronic texts in the seminar – Ftrain, Façade, Last Meal Requested, Twelve Blue: Story in Eight Bars, The Book of Going Forth, Alleph and Dreamphage – as well as theory, approaches and other things. Read more here.

Life between the buildings, the pilot study

I have been meaning to blog this for ages, but have been too busy playing with the data to really get anything out in writing. Lilia, Anjo and I have been working on the pilot study portion of our paper, Life between the buildings: An approach for defining a weblog community (pdf on wrong computer, will add in the morning). The journey through the pilot study has been an interesting one. We have learned many programs, defined many methodology problems, and finally begun to form a way to (hopefully) define weblog communities. To me, an important distinction in this paper is between weblog networks and weblog communities. Networks are (somewhat) easier to define in that they can be shown mathematically: this person links to this person, who links to these people, and in turn these people link back to these people… so on and so forth. Linking, however, does not a community make! Community definition takes more than archaeology… it also takes quite a bit of ethnography. Anyone can have connections… I have around 300 links in my RSS reader (it functions as my blogroll). I do not, however, reside in a community of these bloggers (virtually residing, that is). In order to define a weblog community one must delve into the way these bloggers maintain their network structure. You need to look at measures such as how many different types of connection they maintain with a person (back-channel communication, face-to-face meetings, partnerships, etc.) and how they communicate with each other. Taken in connection with the number and type of links mined from weblog entries over time, interesting pictures begin to emerge.
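The network/community distinction above can be sketched in code: a link graph mined from weblog entries only tells you about reciprocal linking, while a community view needs the count of different connection types each pair maintains. The following is a minimal illustrative sketch, not the actual method or data from the paper; all names and thresholds are invented for the example.

```python
# A directed link graph: who links to whom (mined from weblog entries).
links = {
    "anna": {"ben", "carl"},
    "ben": {"anna"},
    "carl": {"anna", "dana"},
    "dana": set(),
}

# Richer ties per pair of bloggers: back-channel chat, face-to-face
# meetings, partnerships, etc. (the "ethnography" layer).
ties = {
    frozenset({"anna", "ben"}): {"links", "backchannel", "face_to_face"},
    frozenset({"anna", "carl"}): {"links"},
    frozenset({"carl", "dana"}): {"links"},
}

def reciprocal_pairs(graph):
    """The network view: pairs that link to each other."""
    return {
        frozenset({a, b})
        for a, outs in graph.items()
        for b in outs
        if a in graph.get(b, set())
    }

def community_pairs(tie_map, min_types=2):
    """The community view: pairs maintaining several types of connection."""
    return {pair for pair, kinds in tie_map.items() if len(kinds) >= min_types}

# Two pairs link reciprocally, but only anna-ben maintains enough
# different connection types to look community-like.
print(reciprocal_pairs(links))
print(community_pairs(ties))
```

The point of the sketch is that the two functions disagree: anna and carl are connected in the link network, but only anna and ben pass the (hypothetical) multi-type threshold that stands in for the ethnographic work described above.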

Light green – me (Lilia)
Blue – KM blogs
Red – educational blogs
Orange – internet research blog
Green – A-list
Grey – all not coded
(picture via Lilia’s flickr)

Photoshop and Painter

So, Corel Painter IX, or Photoshop CS?

I’ve been an avid Photoshop user for many, many years. In fact, I’ve used the program ever since I first started out painting on the computer. I have to admit, right now, that I am a little biased towards it – for all that Photoshop has crashed many a time and devoured a whole lot of hours’ worth of work in doing so… it has been a loyal companion ever since I first learned its interface. The interface, the functions, the details of the program – these things don’t get in the way of my workflow.

What made me stick to Photoshop even after I had been introduced to other programs was its simplicity. Yes, it’s a huge program – there are tons of filters and massive amounts of effects… but the actual painting tools… you only use that one pencil, and you simply switch between different brushes – some of your own making and some that came with the program. A brush switch is two clicks away. I go from tiny little sharp brush to huge texture brush in a split second, using my Wacom pen’s buttons to access the menus – it just can’t get a whole lot simpler than that. I flip the pen over, and whoah, the eraser is on the other side. The tools work the same way at all times, I have absolute control over every brush stroke, every setting, and it’s all so simple. I’ve built up a routine over the years, a way to get down and dirty with the details: I’m a perfectionist, and I adore how I have complete control down to every last pixel if I choose it. It’s a very, very precise tool.

And, like I said – simple.

That’s what made me cling to the program. I love it.

Now… Painter… let’s just say it’s everything but simple.

I have admittedly tried to pick it up over the course of many years, but always ended up baffled by the multitudes of brushes, pens, pencils, watercolours, erasers (there’s not just one eraser, folks!), and all possible, neat, awesome functions they could think of cramming into this one program. Painter isn’t just complicated – it’s overwhelming. For someone who is used to switching brushes with two clicks of her pen, Painter with all its menus and all its settings seemed really, really intimidating.

The truth is, it gets easier after a while. The tools are wonderful – they’re stunningly clever and well thought out. You can achieve effects in Painter within a matter of moments that take a Photoshop artist hours to figure out. I still couldn’t believe it the first time I used the watercolours and the paint actually ran in front of my eyes. It slowly dripped down towards the bottom of the screen, thinning out as it expanded. It was awesome. Any artistic medium you can imagine can be found here (well, almost anyway). Everything from crayons to oils, and these tools work convincingly. It looks like you’re using real paints.

So. Photoshop – simple. Painter – glamorous. That’s how I see it anyway. It all depends on what you’re looking for. For me, actually painting in Painter is a hassle. There are too many options. My workflow in the program is uneven and I end up flipping between different tools, fiddling with the settings and the options. I want a magic brush like the Photoshop one that I can change with two clicks into becoming anything I want it to be. When I lay down the foundations for an image, I want to be able to work fast.

What I love to do in Painter is perfecting: adding texture, adding life, stirring the kettle a little – what I create in Photoshop tends to be… a little too precise and perfect. Brushing over my smooth surfaces with a rough pencil in Painter might just be the best way to bring some energy back into a painting.

My honest opinion is – use both programs. If you can only afford one, give much thought to what you want out of it… they offer very valuable, but different, options. I’m tired of people claiming one is better than the other – that’s not true. They’re equally awesome programs.

However, they’re good for different things. That, I can’t dispute.

the importance of technology

Today I did a presentation of the lab for a group of visiting people with a marked interest in the technology side of things. When preparing the presentation I realized that I most often do not go into great depth as far as the technology is concerned. It was fun thinking about technical setup in this way again. Of course the technology is an integral part of the lab and often difficult to separate from content, use and vision. I tried to bring things together with a more pronounced technology focus than I normally have. It is kind of obvious that the technology is important to HUMlab (for anyone who has been in the lab at least) but I really think it is worth stressing the importance of the technological setup, access to technology, willingness to experiment, combining making and thinking, and having a technology-rich studio space (where technology does not merely become an object of study or a tool).

We take great care thinking about the integration of technology into our environment and it is something I personally find extremely important. You need to pay attention to the wholeness of the lab and to seemingly small things. For instance, we have always been concerned about the noise level of the equipment we buy. That is why we have quite a few sound-proof boxes. Right now we are looking into buying new PC workstations and we recently borrowed a computer to check its noise level. The computer we borrowed had a very good professional video card installed and it was rather noisy (to say the least). Since this is such an important issue we bought a card of the type we intend to use with the new workstations, and yesterday we tested it in the borrowed computer. The difference was remarkable: this is also a good video card, but much quieter. Finding that out made me happy, as did the process of taking the old card out, installing the new one, testing and so on (admittedly I did not do everything myself – my colleagues in the lab did most of the work).