Team Dick

Xanadu

Posted by @ 11:05 AM on May 27, 2011

“A user interface should be so simple that a beginner in an emergency can understand it within ten seconds.”

Ted Nelson as explained in the anime series Serial Experiments Lain.

Ted Nelson

Ted Nelson is something of a geek philosopher. He thinks a lot about how to simplify computers and, more specifically, our interactions with them. He’s had quite a lot to say about the subject, and he certainly has the authority to say such things. He was one of the original old-world geeks who sat around thinking up the (then) radical ideas that would evolve into the web. Now he’d like us all to know that his bastard child, the web, is wrong.

(Note: Olivia Newton-John will have nothing to do with this post.)

A Bit Of History

Vannevar Bush as explained in the anime series Serial Experiments Lain.

In the 1940s Vannevar Bush developed a theoretical system he called Memex. The Memex system described, for the first time, many objects and functions we use daily, such as personal computers, speech recognition, and the idea of linking documents together such that readers could easily access related documents. This document-linking concept was refined by Ted Nelson into what he called “hypertext”.

Hypertext (the HT in HTTP) is the idea of having text in one document link to text in another document in such a way that a user can quickly access the linked content. Web page links, like those in this blog post, are examples of hypertext and are the very core of what makes the web work.

In 1960 Ted Nelson founded Project Xanadu to develop the first hypertext system; however, the first release of Xanadu wouldn’t come for 40 years. During that time Nelson wrote extensively about the concepts behind the system. It was Nelson’s concepts that helped shape the design of the web, but it was the web that beat Xanadu to the punch and became the de facto document-sharing mechanism of the internet.

Undeterred, Nelson has continued to develop and refine his concepts into a standard he calls Transliterature. When Nelson compares the current web to his transliterature ideas he finds the web woefully lacking.

How The Web Is Wrong

The Transliterature standard is worth reading. The first half is easily digestible; it explains the various concepts, how they relate to the current web, and how they would make it a better place to be. Nelson makes many specific points, which can be grouped into three key topics.

Links Could Be Better

The target of a link may have changed since the link was created. Under Nelson’s system there would be some form of revision control. Links would point to a specific version of a given document. If the document is later edited or deleted, the older version would still exist, and thus links would always point to the content the author originally intended.

Nelson also wants links to work both ways. If links worked both ways, users would have a greater ability to put the content they are viewing in context. They could explore other documents that link to the one they are currently viewing, and the content of those linking pages could help add context and meaning to the primary document. It would also allow document authors to see how their content was being used and in what context it was being placed. If authors found others linking to their work in an inappropriate way, they could refine their content to make their message more concise and meaningful.

It is implied throughout Nelson’s work that links would also point to specific sections of content within a referenced document. Rather than linking to the top of a document and relying on users to locate the specific information being referenced, links would take users directly to the referenced text. This would make accessing referenced content much easier and would let hypertext browsers display only the relevant portions of the referenced text while still browsing the source document, greatly improving the user interface.
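
To make these properties concrete, here is a minimal sketch in Python of what a link record with a pinned document version, a character span, and two-way traversal might look like. All of the names (SpanRef, LinkIndex, the document identifiers) are hypothetical illustrations, not any actual Xanadu or Transliterature format.

    from dataclasses import dataclass
    from collections import defaultdict

    @dataclass(frozen=True)
    class SpanRef:
        """A specific version of a document and a character range within it."""
        doc_id: str    # stable identifier for the document
        version: int   # revision the link was made against
        start: int     # first character of the referenced span
        end: int       # one past the last character

    @dataclass(frozen=True)
    class Link:
        source: SpanRef  # where the link is written
        target: SpanRef  # what it points to

    class LinkIndex:
        """Stores links so they can be followed in either direction."""
        def __init__(self):
            self.outgoing = defaultdict(list)  # doc_id -> links written in that document
            self.incoming = defaultdict(list)  # doc_id -> links pointing at that document

        def register(self, link):
            self.outgoing[link.source.doc_id].append(link)
            self.incoming[link.target.doc_id].append(link)

    # A link from version 3 of "blog/xanadu" to a passage in version 1 of
    # "nelson/transliterature". Editing either document later creates a new
    # version; this link keeps pointing at the versions named here.
    index = LinkIndex()
    index.register(Link(source=SpanRef("blog/xanadu", 3, 120, 155),
                        target=SpanRef("nelson/transliterature", 1, 2040, 2310)))
    print(len(index.incoming["nelson/transliterature"]))  # -> 1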

Current User Interfaces Are Limited

A primary argument of Nelson’s is that current web browsers try to simulate paper rather than create new interfaces that would provide a more efficient method for humans to consume content. For example, web browsers tend to display only a single page at a time. We can switch between tabs or windows, but each tab or window only displays a single document. Nelson suggests interfaces that show multiple documents within a single window. The primary document the user is viewing might occupy half the window, while the other half might show excerpts or thumbnails of content that both links to and is linked from the currently viewed document. Such a feature would allow users to preview content without having to click on links to access full documents.

Nelson offers few ideas on new interface designs, instead pointing to current advances in 3-D graphics technology and suggesting that new avenues of interface design be explored with them. In fact, a 3-D approach is already available: XanaduSpace 1.0 was released in 2007. It’s mainly a demonstration tool to help convey the basic concepts of the interface, and if you have Windows you can try it out yourself.

Ted Nelson demonstrating XanaduSpace.

Transclusion

The last major concept to grasp is transclusion. Transclusion basically means inserting a piece of one document into another. Rather than a link you have to click on to read the material, the actual material is embedded in the document. An entire document could be just a list of transclusions, a bunch of clips from other documents, assembled to produce a new piece of content. XanaduSpace demonstrates this concept nicely.

In the screenshots above you can see the current document in the foreground. The document is actually just a list of transclusions. XanaduSpace renders the page by resolving the transclusions, producing a document of text sourced from other documents. The source documents are displayed in the background, with lines visually connecting each transclusion in the current document back to its source. Were one inclined to do so, one could simply follow these lines back to the source.
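
As a rough illustration of that rendering step (again with hypothetical names and formats, not XanaduSpace’s actual data model), a transclusion list can be thought of as a list of pointers into other documents, and “rendering” as pulling in the referenced slices:

    # A library of source documents, keyed by identifier.
    sources = {
        "constitution": "We the People of the United States, in Order to form a more perfect Union...",
        "memex-essay": "Consider a future device for individual use, which is a sort of mechanized private file and library...",
    }

    # A "document" that contains no prose of its own, only transclusions:
    # (source document, start offset, end offset) for each piece.
    transclusion_list = [
        ("constitution", 0, 13),  # "We the People"
        ("memex-essay", 0, 28),   # "Consider a future device for"
    ]

    def render(transclusions, library):
        """Resolve each transclusion to the referenced slice of its source text."""
        return " ".join(library[doc][start:end] for doc, start, end in transclusions)

    print(render(transclusion_list, sources))
    # -> We the People Consider a future device for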

Would Nelson’s Ideas Make The Web Better?

From the perspective of document management, absolutely. The problem is that platforms like the web were designed with little vision of how the system would be used and abused, twisted and deformed, by the general public. I think Ted Nelson is falling into this trap as well.

Two-Way Link Would Be A Nightmare

The two-way link sounds like a great idea, but it would be very difficult to implement. Identifying an outgoing link from a document is simple, since the document itself contains a list of all its outgoing links. Incoming links, however, could not be stored in the document, as the document’s author would then control which incoming links are revealed, allowing the author to shape the context of their content. An independent mechanism is needed to identify incoming links. The simplest such mechanism would be something similar to a search engine that crawls the internet and identifies links between documents. Browsers would then query these link libraries to identify incoming links to the document currently being viewed.
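
A minimal sketch of what such a link library might do, under the simplifying assumption that the crawler already has the pages in hand (the URLs and the regex-based parsing here are purely illustrative): scan each document’s outgoing links and invert them into an incoming-link index a browser could query.

    import re
    from collections import defaultdict

    # A toy "web": each crawled document's HTML, keyed by URL (made-up addresses).
    crawled_pages = {
        "http://example.org/a": '<p>See <a href="http://example.org/b">B</a>.</p>',
        "http://example.org/c": '<p>Also see <a href="http://example.org/b">B</a>.</p>',
        "http://example.org/b": "<p>No outgoing links here.</p>",
    }

    def build_incoming_index(pages):
        """Invert outgoing links into an incoming-link (backlink) index."""
        incoming = defaultdict(set)
        for url, html in pages.items():
            # Naive href extraction; a real crawler would use a proper HTML parser.
            for target in re.findall(r'href="([^"]+)"', html):
                incoming[target].add(url)
        return incoming

    index = build_incoming_index(crawled_pages)
    # A browser viewing /b could ask the library which documents link to it:
    print(sorted(index["http://example.org/b"]))
    # -> ['http://example.org/a', 'http://example.org/c']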

There is an issue of trust with these link libraries. Do you trust the link library your browser queries to give you every incoming link, or will the list of incoming links be culled by someone or some process to give you only certain results? This issue already exists with current search engines, and the answer seems to be that the industry polices itself, with companies vying for users. But then does that mean the incoming links will be sprinkled with paid-for links as well? After all, these link libraries would need to make money just like the search engines.

Then comes the ever-present issue of spam. Just as spam appears in the results of current search engines, you can expect there would be spam links: advertisements that purposely link to the top 10,000 most popular pages. Link library providers would need to apply techniques to limit the amount of spam, but can such techniques be perfect? If we use existing search engines as the example, then we can be sure some spam will get through and some legitimate links will be blocked. Is something better than nothing?

The two-way link would be immediately abused were it implemented, turning it into a feature that provides more headache than usefulness, unless it was tightly controlled. But if it’s tightly controlled, would readers be getting proper context about the content they’re viewing? Which goes back to trust, which is a whole other blog post in itself. At the very best there’s no simple answer, and at worst it’s an advertiser’s dream and a user’s nightmare.

Transclusion & Copyright

Imagine a document of nothing but transclusions from the official online copy of the U.S. Constitution. The document, when rendered, produces a copyrighted work, such as the content of the latest #1 best seller. The rendered content is copyrighted, but the document used to render the text contains no such copyrighted content. There’s clearly a copyright violation occurring, but where is it? Is it in the document with the transclusions? Is it in the U.S. Constitution? Or is it in the browser where the document is rendered?

There’s no simple answer. The most obvious of the three seems to be the transclusion list. Think of the transclusion list as a set of instructions on how to make something (in this case, something illegal) and let’s look at a precedent: The Anarchist Cookbook. The book instructs people in all manner of legal and some illegal activity, and yet the book itself is perfectly legal to own. It’s the act of creating something based on the instructions where the legal threshold is crossed. If we apply this to transclusion lists, then it’s the moment your computer renders the document that something illegal has occurred. Prosecuting every individual copyright violation would be prohibitively impractical, which means new laws will be championed by the copyright industry. Laws that will probably do far more harm (to civil rights) than good.

Improving The Interface

I agree most with Nelson about the need to improve the current browser interface; unfortunately, this is probably the hardest nut to crack. Good user interface design tries to make things as simple as possible for human beings. The easiest UIs usually build upon interface mechanics humans are already familiar with, coupled with the most minimal of required external resources (i.e. hardware: a mouse, keyboard, and monitor). Until brain implants become commonplace, browsers aren’t likely to feature thought-controlled navigation; even then it’ll probably try to mimic what we already know (e.g. controlling a mouse cursor).

Take the one example Nelson refers to: the proliferation of powerful graphics cards. Full-fledged 3-D environments were certainly a reality 20 years ago, but they rarely featured in mass-market UI design because the typical user didn’t have the thousands of dollars to invest in the hardware. Nowadays even the most economical of computers comes with sufficient graphical capability to render complex 3-D environments. A UI that demands a 3-D-capable graphics card can now be considered reasonable, but bear in mind this is only a relatively recent development. The bleeding-edge UI design theory of today may have to wait another twenty years before its requirements, too, can be considered reasonable.

Another roadblock to UI improvement is a lack of vision. Most of the user interface mechanics we’re familiar with today (such as the use of a mouse to control an on-screen cursor) were either happy accidents or were seen as novelties that would never amount to much of anything. As one well-respected technology writer commented on computer mice, “There is no evidence that people want to use these things.” Radical advancement in UI design is not just an uphill battle; it’s uphill in a blinding snowstorm over a terrain of epoxy while missiles rain from the sky.

Alright Negative Nancy, how can the web be improved?

I have a couple of ideas. One could be implemented fairly quickly; the other is a bit more pie in the sky. Neither is an original idea, but I need an ending to this long-ass blog post, so I’ll offer them up.

The web, by its very nature, is difficult to control. There’s no authority with the power to force content authors or site managers to follow guidelines or standards. This is why mechanisms like the two-way link are bound to fail; they only work if everyone follows the rules.

Sub-Webs

It might be possible to exercise control over a limited area of the web. Imagine a group of libraries that come together under a single governing body. This governing body creates a network that interconnects the libraries. The documents from each library are digitized and put onto that library’s node of the network. The governing body could issue rules on how documents and links are maintained so that they conform to Nelson’s Transliterature ideas. However, there would be one additional rule: links could only exist between documents maintained by members of this network. This drastically limits the scope of the content that network members have to worry about. The formation of two-way links would be managed by the governing body. Rules on revision control could be dictated by the governing body. If a network member doesn’t follow the rules, they get kicked out.

A new standard could be developed that outlines how to interconnect these sub-webs so that two-way linking might expand outside any one network. Perhaps, in time, a majority of the internet winds up in a sub-web from which it can link (via interconnects) to a majority of content on the web. If sub-webs are treated as individual nodes, a further layer can be added: a governing body that groups sub-webs together. If a majority of all content on the web were part of a sub-web, and all sub-webs became interconnected, then such a structure would provide a means to exercise control over the web. Such control is the dream of Nelson and many other technologists.

Of course, the downside to this is that content is being controlled. The nature of the current web is very free and open and has allowed all manner of information to get out that some governing bodies would like to see remain hidden. Which is why, if there ever were such sub-webs, they would probably only exist within academic environments or libraries; the general public would still want a free and open web.

Pie in the Sky

But if I’m shooting for the sky, I think something along the lines of what you see Tony Stark use in Iron Man would be the way to go. There are two specific elements that I think would be fantastic advancements in UI.

The first is the presence of an artificial intelligence (Paul Bettany voice optional) that accepts and understands conversational language. Its simplicity is the near-utopia of UI design: just speak (or sign, or think) what you want and let the AI do the legwork. No need to worry about browser plugins or spam or any other crap we deal with in browsing the web; it’s all sorted out for us by our AI.

The reliance on AI would be a sci-fi writer’s wet dream; they’d be quick to investigate a world of human reliance on computer programs for access to information. What happens if the world’s AIs are hacked to hide information and report false information? What if they turn on us? Etc., etc. Some of their points would be valid.

The second element is a display that projects 3-D objects into 3-D space and allows users to interact with them. As much as Nelson wants to see more use of 3-D, his project is still limited by the transition from the 3-D objects generated by the computer to the decidedly 2-D representation on the computer screen. Such an interface, I think, would feel more natural and be more intuitive to users than the staring-at-an-image-on-a-flat-screen mechanic you’re using right now.

TL;DR

Nelson’s right that the web could be improved, but his ideas require that control be exercised over content placed on the web. That kind of control simply doesn’t exist, and Nelson’s ideas would fail horribly in the anarchy of the web. Developing a better UI is a workable problem, as it doesn’t require that content be controlled, just parsed. However, meaningful changes to how we interact with information on the web are slow and difficult to come by.

Categories: Geek