How the Web Became Unreadable

I thought my eyesight was beginning to go. It turns out, I’m suffering from design.

It’s been getting harder for me to read things on my phone and my laptop. I’ve caught myself squinting and holding the screen closer to my face. I’ve worried that my eyesight is starting to go.

These hurdles have made me grumpier over time, but what pushed me over the edge was when Google’s App Engine console — a page that, as a developer, I use daily — changed its text from legible to illegible. Text that was once crisp and dark was suddenly lightened to a pallid gray. Though age has indeed taken its toll on my eyesight, it turns out that I was suffering from a design trend.

There’s a widespread movement in design circles to reduce the contrast between text and background, making type harder to read. Apple is guilty. Google is, too. So is Twitter.

Typography may not seem like a crucial design element, but it is. One of the reasons the web has become the default way that we access information is that it makes that information broadly available to everyone. “The power of the Web is in its universality,” wrote Tim Berners-Lee, director of the World Wide Web consortium. “Access by everyone regardless of disability is an essential aspect.”

But if the web is relayed through text that’s difficult to read, it curtails that open access by excluding large swaths of people, such as the elderly, the visually impaired, or those retrieving websites through low-quality screens. And, as we rely on computers not only to retrieve information but also to access and build services that are crucial to our lives, making sure that everyone can see what’s happening becomes increasingly important.

We should be able to build a baseline structure of text in a way that works for most users, regardless of their eyesight. So, as a physicist by training, I started looking for something measurable.

Google’s App Engine console after — modern, tiny, and pallid

It wasn’t hard to isolate the biggest obstacle to legible text: contrast, the difference between the foreground and background colors on a page. In 2008, the Web Accessibility Initiative, a group that works to produce guidelines for web developers, introduced a widely accepted ratio for creating easy-to-read webpages.

To translate contrast, it uses a numerical model. If the text and background of a website are the same color, the ratio is 1:1. For black text on a white background (or vice versa), the ratio is 21:1. The Initiative set 4.5:1 as the minimum ratio for clear type, while recommending a contrast of at least 7:1 to aid readers with impaired vision. The recommendation was designed as a suggested minimum to mark the boundaries of legibility. Still, designers tend to treat it as a starting point.
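
To make those numbers concrete, here is a minimal TypeScript sketch of the relative-luminance and contrast-ratio arithmetic the guidelines are based on; the function names and the sample colors in the last three lines are my own illustration, not code from the Initiative.

    // Convert an sRGB channel (0-255) to its linearized value.
    function linearize(channel: number): number {
      const c = channel / 255;
      return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
    }

    // Relative luminance of a six-digit hex color such as "#333333".
    function luminance(hex: string): number {
      const n = parseInt(hex.replace("#", ""), 16);
      const r = (n >> 16) & 0xff, g = (n >> 8) & 0xff, b = n & 0xff;
      return 0.2126 * linearize(r) + 0.7152 * linearize(g) + 0.0722 * linearize(b);
    }

    // Contrast ratio: (lighter + 0.05) / (darker + 0.05), ranging from 1:1 to 21:1.
    function contrastRatio(fg: string, bg: string): number {
      const [l1, l2] = [luminance(fg), luminance(bg)].sort((a, b) => b - a);
      return (l1 + 0.05) / (l2 + 0.05);
    }

    console.log(contrastRatio("#000000", "#ffffff").toFixed(1)); // 21.0 - pure black on white
    console.log(contrastRatio("#333333", "#ffffff").toFixed(1)); // ~12.6 - the dark gray many designers favor
    console.log(contrastRatio("#777777", "#ffffff").toFixed(1)); // ~4.5 - right at the recommended minimum

Run against a real palette, the same three lines make it easy to check whether a proposed text color clears the 4.5:1 floor or the 7:1 recommendation.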

Contrast as modeled in 2008

For example: Apple’s typography guidelines suggest that developers aim for a 7:1 contrast ratio. But what ratio, you might ask, does the text stating that guideline use? It’s 5.5:1.

Apple’s guidelines for developers.

Google’s guidelines suggest an identical preferred ratio of 7:1. But then they recommend 54 percent opacity for display and caption type, a style guideline that translates to a ratio of 4.6:1.
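
That 4.6:1 figure follows from simple alpha compositing: black text at 54 percent opacity over a white background lands on a mid-gray, whose contrast against white can then be checked with the same formula. A small sketch, reusing the contrastRatio helper from the example above:

    // Black text at opacity a over white composites to a gray whose channel
    // value is (1 - a) * 255; 54% opacity works out to roughly #757575.
    function effectiveContrastOnWhite(textOpacity: number): number {
      const channel = Math.round((1 - textOpacity) * 255);
      const hex = "#" + channel.toString(16).padStart(2, "0").repeat(3);
      return contrastRatio(hex, "#ffffff");
    }

    console.log(effectiveContrastOnWhite(0.54).toFixed(1)); // ~4.6, barely above the 4.5:1 minimum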

The typography choices of companies like Apple and Google set the default design of the web. And these two drivers of design are already dancing on the boundaries of legibility.

It wasn’t always like this. At first, text on the web was designed to be clear. The original web browser, built by Berners-Lee in 1989, used crisp black type on a white background, with links in a deep blue. That style became the default on the NeXT machine. And though the Mosaic browser launched in 1993 with muddy black-on-gray type, by the time it had spread across the web, Mosaic had flipped to clear black text over white.

When HTML 3.2 launched in 1996, it broadened the options for web design by creating a formal set of colors for a page’s text and background. Yet browser recommendations advised limiting color choices to a group of 216 “web-safe” colors, the most that 8-bit screens could display reliably. As 24-bit screens became common, designers moved past the garish recommended colors of the ’90s to make more subtle design choices. Pastel backgrounds and delicate text were now a possibility.

Yet computers were still limited by the narrow choice of fonts already installed on the device. Most of these fonts were solid and easily readable. Because the standard fonts were crisp, designers began choosing lighter colors for text. By 2009, the floodgates had opened: designers could now download fonts to add to web pages, decreasing their dependency on the small set of “web-safe” fonts.

As LCD technology advanced and screens achieved higher resolutions, a fashion for slender letterforms took hold. Apple led the trend when it designated Helvetica Neue Ultralight as its system font in 2013. (Eventually, Apple backed away from the trim font by adding a bold text option.)

As screens have advanced, designers have taken advantage of their increasing resolution by using lighter typefaces, lower contrast, and thinner fonts. However, as more of us switch to laptops, mobile phones, and tablets as our main displays, the ideal desktop conditions of the design studio are increasingly uncommon in everyday life.

So why are designers resorting to lighter and lighter text? When I asked designers why gray type has become so popular, many pointed me to the Typography Handbook, a reference guide to web design. The handbook warns against too much contrast and recommends that developers build with a very dark gray (#333) instead of pitch black (#000).

The theory espoused by designers is that black text on a white background can strain the eyes. Opting for a softer shade of black, instead, makes a page more comfortable to read. Adam Schwartz, author of “The Magic of CSS,” reiterates the argument:

The sharp contrast of black on white can create visual artifacts or increase eye strain. (The opposite is also true. This is fairly subjective, but still worth noting.)

Let me call out the shibboleth here: Schwartz himself admits the conclusion is subjective.

Another common justification is that people with dyslexia may find contrast confusing, though studies recommend dimming the background color instead of lightening the type.

Several designers pointed me to Ian Storm Taylor’s article, “Design Tip: Never Use Black.” In it, Taylor argues that pure black is more concept than color. “We see dark things and assume they are black things,” he writes. “When, in reality, it’s very hard to find something that is pure black. Roads aren’t black. Your office chair isn’t black. The sidebar in Sparrow isn’t black. Words on web pages aren’t black.”

Taylor uses the variability of color to argue for subtlety in web design, not increasingly faint text. But Taylor’s point does apply — between ambient light and backlight leakage, by the time a color makes it to a screen, not even plain black (#000) is pure; instead it has become a grayer shade. White coloring is even more variable because operating systems, especially on mobile, constantly shift their brightness and color depending on the time of day and lighting.

This brings us closer to the underlying issue. As Adam Schwartz points out:

A color is a color isn’t a color…
…not to computers…and not to the human eye.

What you see when you fire up a device is dependent on a variety of factors: what browser you use, whether you’re on a mobile phone or a laptop, the quality of your display, the lighting conditions, and, especially, your vision.

When you build a site and ignore what happens afterwards — when the values entered in code are translated into brightness and contrast depending on the settings of a physical screen — you’re avoiding the experience that you create. And when you design in perfect settings, with big, contrast-rich monitors, you blind yourself to users. To arbitrarily throw away contrast based on a fashion that “looks good on my perfect screen in my perfectly lit office” is to abdicate designers’ responsibilities to the very people for whom they are designing.

My plea to designers and software engineers: Ignore the fads and go back to the typographic principles of print — keep your type black, and vary weight and font instead of grayness. You’ll be making things better for people who read on smaller, dimmer screens, even if their eyes aren’t aging like mine. It may not be trendy, but it’s time to consider who is being left out by the web’s aesthetic.


Source: How the Web Became Unreadable

WSJ Homepage Redesign Process Work – Simply Lisa

WSJ Homepage Redesign Process Work

As part of the WSJ.com site redesign, our process involved figuring out a new responsive grid, a new editorial strategy and an overall rebrand. The ultimate goals were to simplify and increase time on site.


One of the first things we did was work out what our new grid system would be. We had to decide what screen sizes we were going to design for and what image sizes would work across the various snap points, while ensuring that our ad inventory would flow in nicely as well. This is only one sample of the approaches we took.


Through numerous stakeholder interviews, we sorted through a new editorial strategy for the homepage. In this early iteration, we took the existing homepage, broke it out into content chunks, and sketched out ways to reconfigure the page in a more digestible way that would work across multiple snap points.


As we did sketches, one of the main underlying approaches we took was to group items modularly. This would allow not only for a simpler development approach for a fully responsive site, but would allow users to scan the page more easily through horizontal bands.


Oftentimes, visual designs would precede fully fleshed-out wireframes. Wires and visual designs were done side by side in order to better facilitate communication with key stakeholders and to get buy-in on direction early. This approach also helped as we did user testing. This design was an early concept of the horizontal banding approach that helped speak to the strategy of grouping certain types of content. The use of color was an idea to begin to cue and train users on certain content types, an approach that would ultimately be used throughout the site.


As the process continued, wireframes evolved into more high fidelity, fully fleshed out concepts using real content to ultimately figure out how a user would navigate through the site and how our editorial stakeholders would program the page.


As we worked through the content strategy, we evolved the site visually, sorting through interaction details. One main idea we had, in trying to keep with this horizontal approach, was to use strong graphics and photography to draw users down the page.


One main challenge we faced in this horizontal approach was reconciling our ad requirements. We didn’t want to bring back a traditional right rail but realized we had to keep a remnant of that. As we fleshed this out, we continued to play with the overall look of the brand and how it would work and look across multiple snap points.


As we progressed, we would hang these responsive posters on the wall so that we could easily reference them in discussion.


As I mentioned, our process involved doing wireframes and visual designs side by side. Depending on where we were in the process, an older visual design might inform a wireframe and vice versa.


One idea that we had from the beginning that evolved was our use of color. Initially we thought of using color to indicate certain types of content. However, upon seeing a bit of a rainbow effect across the site, we decided instead to take a more minimal approach: keep the site predominantly black & white and have small accents of color. This would in turn allow photos and graphics to be strongly showcased.


Date: Summer 2013 through August 2014

Client: The Wall Street Journal

Role: Design Lead


WSJ Homepage Redesign Process Work – Simply Lisa.

Websites Prep for Google’s ‘Mobilegeddon’ – Digits – WSJ


Google is changing its search algorithm Tuesday to favor sites that look good on smartphones, a move some are calling “mobilegeddon” because of its impact on website operators.

The new formula will give a boost in Google’s “organic” search results to sites that are designed to look good on smartphones, while penalizing those that don’t.

Google changes its algorithm frequently, but in this case it took the unprecedented step of warning sites in February that it was coming, and giving them tips on how to prepare.

“Google wants people going mobile friendly,” said Danny Sullivan, founding editor of Search Engine Land. He said the April 21 deadline had created a “panic and frenzy” among Web developers to make the changes.

There’s potential benefit for Google in the move, too. Users are conducting more searches on mobile devices. A Google executive said at a conference last year that smartphone searches could soon outnumber searches from personal computers.

However, advertisers typically pay less for clicks from phones, because they less often lead to sales. Encouraging developers to tailor sites to look good on smartphones should lead to more sales and consequently higher prices for Google’s mobile ads, said Matt Ackley, chief marketing officer of Marin Software, an advertising technology firm.

Google wants developers to make their sites look better on smartphones by tailoring them for small screens, using bigger text and links that are farther apart and easier to tap.

“As people increasingly search on their mobile devices, we want to make sure they can find content that’s not only relevant and timely, but also easy to read and interact with on smaller mobile screens,” a Google spokeswoman said.

Some website operators create special sites just for mobile devices, while others are building a single “responsive” site that adapts to the size of the user’s screen.

Joe Megibow, chief digital officer of American Eagle Outfitters, says his company is preparing to launch a new “responsive” site which will mean it no longer has to maintain two different sites, one for mobile and one for personal computers.


Websites Prep for Google’s ‘Mobilegeddon’ – Digits – WSJ.

Rhizome Today: The browser lives

Michael Connor | Wed Jan 21st, 2015 1:06 p.m.

This is Rhizome Today for Wednesday, January 20, 2015. This post will be deleted on January 21.

Rafaël Rozendaal, tinycursor.com (2008). Screenshot from Chrome browser on Android device.

It’s well known by this point that, while mobile use rises, the web browser’s popularity is declining. Forbes reported last year that:

While users are spending more time on their devices (an average of 2 hours and 42 minutes per day, up four minutes on the same period last year), how they use that time has changed as well. Only 22 minutes per day are spent in the browser, with the balance of time focused on applications.

While working on our new website, we’ve been talking a lot about what this means for the experience of browser art. A lot of browser art isn’t that fun to look at on a mobile browser. The scale of the thing in relation to your body totally changes, as well as some of the functions that the browser supports. Visiting Rafaël Rozendaal’s tinycursor.com on your mobile device, for example, will give you the message that “this site is not mobile yet 🙁 the following content IS ready for your device,” with a list of links to mobile-ready net art projects.

Some artists do make apps, but that’s a technical vanguard; net art has long been associated with the slightly less technical. Even JODI’s now-legendary first webpage was basically a coding mistake. The technical accessibility of net art is part of what makes it cool.

It seems like most net artists participate in mobile culture by sharing their work and its documentation within other apps–especially Twitter, Tumblr, Instagram, Facebook, Vine, and NewHive–which impose certain constraints. While artists readily embrace these constraints, working in and against them, it’s been interesting to note that browser art still has an important cultural role to play.

This role was highlighted in Olia Lialina’s must-read essay “Rich User Experience, UX and Desktopization of War,” in which she notes that home page culture is still going strong.

There are a few initiatives right now supporting my observation that home page culture is having a second comeback, this time on a structural rather than just visual level.

– neocities.com – free HTML design without using templates.

– tilde.club – as the above, plus URLs as an expression of users belonging to a system; and web-rings as an autonomy in hyper linking.

– superglue.it – “Welcome to my home page” taken to the next level, by hosting your home page at your actual home.

To that list, I would add the 2013 edition of The Wrong, the digital art biennial (now coming up again in September), which included numerous “pavilions” taking the form of bespoke home pages, such as the Chambers pavilion.

I spent a lot of my 20s in sparsely attended cinemas where 16mm projectors often clattered away noisily. When I moved to London in 2005, it was clear that this seemingly obsolete platform was the site of a vital and contemporary artistic dialogue–in large part because of the curatorial rigor of Ian White and his passionate arguments on behalf of the cinema auditorium as a space of possibility.

It feels to me like the browser may be starting to play a vaguely analogous role in net art culture. This is not because of its technical obsolescence, but because there is a sense that the most heated commercial battles for dollars and eyeballs are taking place on the mobile device’s home screen. By working for the browser, and specifically for the desktop, artists may skirt those battles and participate in the reinvigoration of the browser as a cultural space.

 

Rich User Experience, UX and Desktopization of War


“If we only look through the interface we cannot appreciate the ways in which it shapes our experience”
— Bolter, Gromala: Windows and Mirrors

Olia Lialina & Dragan Espenschied: “Rich User Experience” from the series With Elements of Web 2.0, 2006

This essay is based on my lecture given at Interface Critique, Berlin University of the Arts, November 7th 2014.

Thank you for hosting me. Today I’m talking as the Geocities Institute’s Head of Research, an advocate for computer users’ rights, and interface design teacher.

RUE

I’ve been making web pages since 1995; since 2000 I’ve been collecting old web pages, and since 2004 I’ve been writing about native web culture (digital folklore) and the significance of personal home pages for the web’s growth, for personal growth, and for the development of HCI.

So I remember very well the moment when Tim O’Reilly promoted the term Web 2.0 and announced that the time of Rich User Experience had begun. This buzzword was based on Rich Internet Applications, a term coined by Macromedia that literally meant their Flash product. O’Reilly’s RUE philosophy was also rather technical: the richness of user experiences would arise from the use of AJAX, Asynchronous JavaScript and XML.

The web was supposed to become more dynamic, fast, and “awesome,” because many processes that users previously had to trigger consciously now started to run in the background. You didn’t have to submit or click or even scroll anymore; new pages, search results, and pictures would appear by themselves, fast and seamless. “Rich” meant “automagic” and … as if you were using desktop software.

As Tim O’Reilly stated in his September 2005 blog post “What Is Web 2.0”: “We are entering an unprecedented period of user interface innovation, as web developers are finally able to build web applications as rich as local PC-based applications.”
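
For readers who never wrote this kind of code, here is a minimal TypeScript sketch of the background-request pattern being described; the endpoint path, the element id, and the unthrottled scroll listener are invented for illustration.

    // Fetch more content in the background and splice it into the page:
    // no form submit, no navigation, no page reload.
    function loadMoreResults(query: string): void {
      const xhr = new XMLHttpRequest();
      xhr.open("GET", "/search?q=" + encodeURIComponent(query), true); // asynchronous
      xhr.onload = () => {
        if (xhr.status === 200) {
          const target = document.getElementById("results");
          if (target) target.innerHTML += xhr.responseText;
        }
      };
      xhr.send();
    }

    // Wired to scrolling (a real page would throttle this), new results
    // simply "appear by themselves" as the user moves down the page.
    window.addEventListener("scroll", () => loadMoreResults("web 2.0"));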

But Web 2.0 was not only about a new way of scripting interactions. It was an opportunity to become a part of the internet, also automagically. No need to learn HTML or register a domain or anything else: Web 2.0 provided pre-made channels for self-expression and communication, hosting and sharing. No need anymore to be your own information architect or interface designer looking for a way to deliver your message. In short: no need to make a web page.

Homepage, last modified 1999-07-15 17:43:15, from the Geocities Research Institute collection

The paradox for me at that time was that Rich User Experience was the name for a reality in which user experiences were getting poorer and poorer. You didn’t have to think about the web or web-specific activities anymore.

Also, Web 2.0 was the culmination of approximately seven years of neglecting and denying the experience of web users—where experience is Erfahrung, rather than Erlebnis. So layouts, graphics, scripts, tools and solutions made by naïve users were neither seen as a heritage nor as valuable elements or structures for professional web productions.

That’s why designers today are certain that responsive design was invented in 2010, mixing up the idea with the coining of the term; the idea itself was there from the very beginning.

And it also explains why the book Designing for Emotion, from the very sympathetic “books apart” series, gives advice on how to build a project “from human to human” without even mentioning that there is decades-old experience of humans addressing humans on the web.

“Guess what?! I got my own domain name!” announces the proud user who leaves Geocities for a better place. – “So if you came here through a link, please let that person know they need to change their link!”

“If you take the time to sign my guest book I will e-mail you in return,” writes another user in an attempt to get feedback. Well, this one might be more an example of early gamification than of emotional design, but this direct human-to-human communication – the very thing current designers most desire to create – is very strong.

Geocities Research Institute: What Did Peeman Pee On?, Installation, Württembergischer Kunstverein, 2014

A few days ago, my team at the Geocities Research Institute found 700 answers to the question “What did peeman pee on?” Peeman is an animated GIF created by an unknown author, widely used in the “manly” neighborhoods of Geocities to manifest disgust or disagreement with some topic or entity – a sports team, a band, a political party, etc. – a kind of “dislike” button.

It isn’t a particularly sophisticated way to show emotions or manifest an attitude, but it is still so much more interesting and expressive than what is available now. First, because it is an expression of dislike at a time when there is only an opportunity to like. Second, the statement lies outside of any scale or dualism: the dislike is not the opposite of a like. Third, it is not a button or a function; it works only in combination with another graphic or word. Such a graphic needed to be made or found and collected, then placed in the right context on the page—all done manually.

I am mainly interested in early web amateurs because I strongly believe that the web in that state was the culmination of the Digital Revolution.

And I don’t agree that the web of the 1990s can just be considered a short period before we got real tools, an exercise in self-publishing before real self-representation. I’d like to believe that 15 years of not making web pages will be classified as a short period in the history of the WWW.

There are a few initiatives right now supporting my observation that home page culture is having a second comeback, this time on a structural rather than just visual level.

  • neocities.com – free HTML design without using templates.
  • tilde.club – as the above, plus URLs as an expression of users belonging to a system; and web-rings as an autonomy in hyper linking.
  • superglue.it – “Welcome to my home page” taken to the next level, by hosting your home page at your actual home.

* * *

I had the chance to talk at the launch of superglue.it at WORM in Rotterdam a month ago. Five minutes before the event, the team members were discussing who should go on stage. The graphic designer was not sure if she should present. “I’ve only made icons,” she said. “Don’t call them Icons,” the team leader encouraged her, “call them User Experience!” And his laughter sank in with everybody else’s.

Experience Design and User Illusion

We laughed because if you work in new media design today, you hear and read and pronounce this word every day. Rich User Experience was perhaps a term that kept its proponents and critics busy for some time, but it never made it into mainstream usage; it was always overshadowed by Web 2.0.

With User Experience (UXD, UX, XD) it is totally different:

The vocabulary of HCI, human-computer interaction design, which had only grown since the field’s inception, has been shrinking for the past two years.

Forget input and output, virtual and augmented, focus and context, front-end and back-end, forms, menus, and icons. All of this is experience now. Designers and companies who were offering web/interface solutions a year ago are now committed to UX. Former university media design departments are becoming UX departments. The word interface is substituted by experience in journalistic texts and conference fliers. WYSIWYG becomes a “complete drag and drop experience,” as a web publishing company just informed me in an email advertising their new product.

Source: Elizabeth Bacon, Defining UX, Devise Consulting, 2014-01-28

UX is not new; the term is fully fledged. It was coined by Don Norman in 1993 when he became head of Apple’s research group: “I invented the term because I thought human interface and usability were too narrow. I wanted to cover all aspects of the person’s experience with the system including industrial design, graphics, the interface, the physical interaction and the manual.”

Recalling this in 2007, he added: “Since then the term has spread widely, so that it is starting to lose its meaning.” Other prophets have been complaining for years that not everybody who calls themselves an “experience designer” actually practices it.

This is business as usual: terms appear, spread, transform, become idioms; the older generation is unhappy with the younger one, and so on. I don’t bring this up to distinguish “real” from “fake” UX designers.

I’m concerned about the design paradigm that bears this name at the moment, because it is too good at serving the ideology of Invisible Computing. As I argued in Turing Complete User, the word “experience” is one of three words used today referring to the main actors of HCI:

HCI        | UX
Computer   | Technology
Interface  | Experience
Users      | People

The role of “experience” is to hide the programmability or even customizability of the system, and to minimize and channel users’ interaction with it.

“User illusion” has been a guiding principle for interface designers since Xerox PARC, since the first days of the profession. They were fully aware that they were creating illusions: of paper, of folders, of windows. UX creates an illusion of unmediated, natural space.

UX covers the holes in Moore’s Law: when computers are still bigger than expected, it can help to shrink them in your head. UX fills the awkward moments when AI fails. It brings the “user illusion” to a level where users have to believe that there is no computer, no algorithms, no input. This is achieved by providing direct paths to anything a user might want to achieve, by scripting the user, and by making an effort on the audiovisual and aesthetic levels to leave the computer behind.

The “Wake-up Light” by Philips is an iconic object that is often used as an example of what experience design is. It is about neither its look nor its interaction, but about the effect it produces: a sunrise. The sunrise is a natural, glorious phenomenon, as opposed to artificial computer effects created from pixels or, let’s say, the famous rain of glowing symbols from The Matrix. Because an experience is only an experience when it is “natural.”

There is no spoon. There is no lamp.

Source: Philips’ promotional image for Wake-up Light, 2010, lifted from Amazon

When Don Norman himself describes the field, he keeps it diplomatic: “[W]e can design in the affordances of experiences, but in the end it is up to the people who use our products to have the experiences.”—Of course, but affordances are there to align the users’ behaviors with a direct path. So it is not really up to the “people,” but more up to the designer.

One of the world’s most convincing experience design proponents, Marc Hassenzahl, clearly states: “We will inevitably act through products, a story will be told, but the product itself creates and shapes it. The designer becomes an ‘author’ creating rather than representing experiences.”

That’s very true. Experiences are shaped, created and staged. And it happens everywhere:

On Vine, when commenting on another user’s video, you are not presented with an empty input form, but are overwriting the suggestion “say something nice.”

Screenshot of vine.co, taken 2015-01-02

On Tumblr, a “close this window” button becomes “Oh, fine.” I click it and hear the UX expert preaching: “Don’t let them just close the window, there is no ‘window,’ no ‘cancel’ and no ‘OK.’ Users (sorry, people) should greet the new feature, they should experience satisfaction with every update!”

Screenshot of tumblr.com, taken 2014-12-28

As the Nielsen Norman Group puts it: “User experience design (UXD or UED) is the process of enhancing user satisfaction by improving the usability, ease of use, and pleasure provided in the interaction between the user and the product.”

Such experiences can be orchestrated on the visual level: in web design, video backgrounds are masterfully used today to make you feel the depth, the bandwidth, the power of a service like Airbnb, to bring you there, to the real experience. On the structural level, a good example is how Facebook, three years ago, changed your tool for everyday communication into a tool to tell the story of your life with its “timeline.”

You experience being heard when Siri gets a human voice, and an ultimate experience when this voice stays calm, whatever happens. (The only thing that actually ever happens is Siri not understanding what you say. But she is calm!)

You experience being needed and loved when you hold PARO, the best-selling lovable robot in the world, because it has big eyes that look into your eyes, and you can pet its nice fur. Smart algorithms, lifelike appearance, and behavior alone wouldn’t be enough to keep users from feeling like consumers of a manufactured, programmable system.

Critics of AI like Sherry Turkle warn that we must see and accept machines’ “ultimate indifference,” but today’s experience designers know how to script the user to avoid any gaps in the experience. There is no way to get out of this spectacle. When PARO is out of battery, it needs to be charged via a baby’s dummy plugged into its mouth. If you possess this precious creature, you experience its liveliness even when it is just a hairy sensor sandwich.

Source: PARO Robots, Robo Japan 2008 exhibition

This approach leads to some great products, on screen and IRL, but it alienates as well. Robotics doesn’t give us a chance to fall in love with the computer if it is not anthropomorphic. Experience design prevents us from thinking about and valuing computers as computers, and interfaces as interfaces. It makes us helpless. We lose the ability to narrate ourselves and, on a more pragmatic level, we are no longer able to use personal computers.

We hardly know how to save and have no idea how to delete. We can’t UNDO!

* * *

UNDO was a gift from developers to users, a luxury a programmable system can provide. It became an everyday luxury with the first GUI developed at Xerox and turned into a standard for desktop operating systems to follow. Things changed only with the arrival of smartphones: neither Android nor Windows Phone nor BlackBerry provides a cross-application alternative to CTRL+Z. iPhones offer the embarrassing “shake to undo.”

What is the reasoning of these devices’ developers?

Not enough space on the nice touch surface for an undo button; the idea that users should follow some exact path along the app’s logic, which would lead somewhere anyway; the promise that the experience is so smooth that you won’t even need this function.

Should we believe it and give up? No!

There are at least three reasons why to care about UNDO:

  1. UNDO is one of very few generic (“stupid”) commands. It follows a convention without sticking its nose into the user’s business. (A minimal sketch of such a generic command history follows after this list.)
  2. UNDO has a historical importance. It marks the beginning of the period when computers started to be used by people who didn’t program them, the arrival of the real user and the naive user. The function was first mentioned in the IBM research report Behavioral Issues in the Use of Interactive Systems, which outlined the necessity of providing future users with UNDO: “the benefit to the user in having—even knowing—of a capability to withdraw a command could be quite important (e.g., easing the acute distress often experienced by new users, who are worried about ‘doing something wrong’).”
  3. UNDO is the border-line between the Virtual and the Real World everybody is keen to grasp. You can’t undo IRL. If you can’t undo it means you are IRL or on Android.
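
To make “generic” concrete, here is a minimal TypeScript sketch of the kind of command history a cross-application UNDO relies on; the Command interface and the class and variable names are my own illustration, not any particular system’s API.

    // Any action that knows how to reverse itself can participate in UNDO;
    // the history needs no knowledge of what the action actually does.
    interface Command {
      execute(): void;
      undo(): void;
    }

    class CommandHistory {
      private done: Command[] = [];

      run(cmd: Command): void {
        cmd.execute();
        this.done.push(cmd);
      }

      // CTRL+Z: withdraw the most recent command, whatever it was.
      undo(): void {
        const last = this.done.pop();
        if (last) last.undo();
      }
    }

    // Usage: an appending edit that records how to take itself back.
    let text = "hello";
    const history = new CommandHistory();
    history.run({ execute: () => { text += " world"; }, undo: () => { text = text.slice(0, -6); } });
    history.undo(); // text is "hello" again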

* * *

Commands, shortcuts, clicks and double clicks … not a big deal? Not an experience?

Let me leave you with this supercut for a moment:

This is us, people formerly known as users, wildly shaking our “magic pane of glass” to erase a word or two, crying out to heaven to “undo what hurts so bad,” bashing hardware because we failed with software.

We are giving up our last rights and freedoms for “experiences,” for the questionable comfort of “natural interaction.” But there is no natural interaction, and there are no invisible computers, only hidden ones. Until the moment when, as in the episode with The Guardian, the guts of the personal computer are exposed.

In August 2013, The Guardian received an order to destroy the computer on which Snowden’s files were stored. In the mass media we saw explicit pictures of damaged computer parts and images of journalists executing drives and chips, and heard the Guardian’s editor-in-chief say: “It’s harder to smash up a computer than you think.” And it is even harder to accept it as a reality.

For government agencies, the destruction of hardware is a routine procedure. From their perspective, the case of deletion is thoroughly dealt with when the media holding the data is physically gone. They are smart enough to not trust the “empty trash” function. Of course the destruction made no sense in this case, since copies of the files in question were located elsewhere, but it is a great symbol for what is left for users to do, what is the last power users have over their systems: They can only access them on the hardware level, destroy them. Since there is less and less certainty of what you are doing with your computer on the level of software, you’ll tend to destroy your hard drive voluntarily every time you want to really delete something.

Source: Frank da Cruz: Programming the ENIAC, 2003

Classic images of the first ever computer ENIAC from 1945 show a system maintained by many people who rewire or rebuild it for every new task. ENIAC was operated on the level of hardware, because there was no software. Can it be that this is the future again?

Source: Protodojo: RoboTouch iPad Controller, 2011-08-21

In 2011, 66 years after ENIAC, ProtoDojo showcased a widely celebrated “hack” to control an iPad with a vintage NES video game controller. The way to achieve this was to build artificial fingers, controlled by the NES joypad, to touch the iPad’s surface, modifying the hardware from the outside, because everything else, especially the iPad’s software, is totally inaccessible.

Every victory of experience design (a new product “telling the story,” or an interface meeting the “exact needs of the customer, without fuss or bother”) widens the gap between a person and a personal computer.

The morning after “experience design”: interfaceless, disposable hardware; personal hard-drive shredders; primitive customization via mechanical means, rewiring, reassembling, making holes in hard disks, in order to delete, to log out, to “view offline.”

* * *

Having said that, I’d like to add that HCI designers have huge power, and often seem unaware of it. Many of those who design interfaces never studied interface design; many of those who did never studied its history, never read Alan Kay’s words about creating the “user illusion,” never questioned this paradigm, and never reflected on their own decisions in this context. And not only should interface designers be educated about their role; it should also be discussed and questioned which tasks can be delegated to them in general. Where are the borders of their responsibilities?

Combat Stress and The Desktopization of War

Michael Shoemaker: MQ-9 Reaper training mission from a ground control station on Holloman Air Force Base, N.M., 2012

In 2013, Dr. Scott Fitzsimmons and MA graduate Karina Sangha published the paper Killing in High Definition. They raised the issue of combat stress among operators of armed drones (Remotely Piloted Aircraft) and suggested ways to reduce it. One of them is to Mask Traumatic Imagery.

To reduce RPA operators’ exposure to the stress-inducing traumatic imagery associated with conducting airstrikes against human targets, the USAF should integrate graphical overlays into the visual sensor displays in the operators’ virtual cockpits. These overlays would, in real-time, mask the on-screen human victims of RPA airstrikes from the operators who carry them out with sprites or other simple graphics designed to dehumanize the victims’ appearance and, therefore, prevent the operators from seeing and developing haunting visual memories of the effects of their weapons.

I had the students of my interface design class read this paper. I asked them to imagine what this masking could be. After some hesitation to even think in this direction, their first drafts alluded to the game The Sims:

Of course the authors of this paper are not ignorant or evil. A paragraph below the quoted one, they state that they are aware their ideas could be read as advocacy for a “play station mentality,” and they note that RPA operators don’t need artificial motivation to kill; they know what they are doing. To sum up, this is not a call for a gamification of war; it is not about killing more, but about feeling fine after the job is done.

I think it was this paper, its attitude, this call to solve an immense psychiatric task at the level of the interface, that made me see HCI in a new light.

Since the advent of the Web, new media theoreticians have been excited about convergence: you have the same interface to shop, to chat, to watch a film … and to launch weapons, I could continue now. That wouldn’t be entirely true; drone operators use other interfaces and specialized input devices. Still, as in the image above, they are equipped with the same operating systems running on the same monitors that we use at home and in the office. But this is not the issue. The convergence we find here is even scarier: the same interface to navigate, to kill, and to cure post-traumatic stress.

Remember Weizenbaum reacting furiously to Colby’s plans to implement the Eliza chatbot in actual psychiatric treatment? He wrote: “What must a psychiatrist think he is doing while treating a patient that he can view the simplest mechanical parody of a single interviewing technique as having captured anything of the essence of a human encounter.” Weizenbaum was not asking for better software to help cure patients; he was rejecting the core idea of using algorithms for this task. It is an ethical rather than a technical or design question, just as the masking of traumatic imagery is now.

If we think about the current state of the art in related fields, we see that on the technological level everything is already in place for the computer display to act as a gun sight and, at the same time, as a psychotherapist coach.

  • There are tests to cure PTSD in virtual reality, and studies that report successes. So there is belief in VR’s healing abilities.
  • There are plenty of examples in gaming and mobile apps proving that the real world can be augmented with generated worlds in real time.
  • There is experience in simplifying the real—or rather too real—images, as in the case of airport body scanners.
  • And last but not least, there is a roughly seven-year tradition of masking objects, information, and people on Google Maps. This raises the issue of the banalization of masking as a process. For example, to hide military bases, Google’s designers use the “crystallization” filter, known and available to everyone because it is a default filter in every image-processing program. So the act of masking doesn’t appear as an act that could raise political and ethical questions, but as one click in Photoshop.

Those preconditions, especially the last one, made me think that something even more dangerous than the gamification of war can happen: the desktopization of war. (It has already arrived on the level of commodity computing hardware and familiar consumer operating systems.) It can happen when experience designers deliver interfaces to pilots that complete the narrative of getting things done on your personal computer, interfaces that deliver the feeling of being the user of a personal computer and not a soldier: by merging classics of direct manipulation with real-time traumatic imagery, by replacing the gun sight with a marquee selection tool, by “erasing” and “scrolling” people, by “crystallizing” corpses or replacing them with “broken image” symbols, by turning on the screen saver when the mission is complete.

We created these drafts in the hope of preventing others from thinking in this direction.

Eraser Tool by Madeleine Sterr

Screen Saver by Monique Baier

Augmented Reality shouldn’t become Virtual Reality. On a technical and conceptual level, interaction designers usually follow this rule, but when it comes to gun sights it must become an ethical issue instead.

Experience designers should not provide experiences for gun sights. There should be no user illusion and no illusion of being a user created for military operations. The desktopization of war shouldn’t happen. Let’s use clear words to describe the roles we take and the systems we bring to action:

War        | UX          | HCI
Gun        | Technology  | Computer
Gun Sight  | Experience  | Interface
Soldiers   | People      | Users

* * *

I look through a lot of old (pre-RUE) home pages every day, and I see quite a few that were made to release stress, to share with cyberspace what their authors couldn’t share with anybody else; sometimes it is noted that they were created on the direct advice of a psychotherapist. Pages made by people with all kinds of backgrounds, veterans among them. I don’t have any statistics on whether making a home page ever helped anybody get rid of combat stress, but I can’t stop thinking of drone operators coming back home in the evening, looking for peeman.gif in collections of free graphics, and making a home page.

They should, of course, find more current icons to pee on. And by all means tell their story, share their experiences, and link to the pages of other soldiers.

Olia Lialina, January 2015

Rich User Experience, UX and Desktopization of War.