Off the Top: RSS Entries
It was a weekend not focussing on technology or the internet much, but I saw some of the usual patterns that are signs something is not well; other things had taken priority of focus. Sunday I gave a quick look, let out that deep human gasp, and my kid looked up and asked what was wrong as my head slumped. I was late to the news that Aaron Swartz had taken his life. I don’t know what site I read it on, but all my screens of services and people I follow closely were sharing the news and their remembrances.
Aaron’s passing was beyond the “he is too young” and “what a shame”. He was not only someone special, he was changing the world and had been for quite a while: sorting through the battles of standardizing RSS, and working with Creative Commons to create a modern equivalent that could be relatively easily attached to published and shared content, allowing much better and more open access to it. He helped contribute to Reddit as it was hatching and stayed with it through to its being purchased. He has done so much more since. But, he died at 26 years old. At 14, when he was partaking in the RSS discussions on listservs, nobody knew his age. Nobody had a clue, and it wasn’t known until they asked him to come to a gathering to discuss things face to face. This story of his age, and the wonderful story of how people found out, spread on listservs I participated in and around post-conference drinks in the early 00s. At the age of 14 Aaron had lore. He had earned the respect and the right to be a peer with the early greybeards of the Web who were battling to understand it all, help it work better, and make the world better because of it. Aaron fit right in.
I was at a few events and gatherings that Aaron was at in the early 00s, but I never had the chance to work or interact with him. But, I have worked and interacted with many who did have that fortune, and even with the lore Aaron had, they were impressed by his approach, capabilities, and what he could accomplish. The tech community is a meritocracy, and you earn credibility by doing. The world has always been changed by those who do, but also by those who are curious, yet are guided by an understanding of an optimal right (correct) way.
What we lost as a society was not only a young man who earned his place and credibility, and earned it early, but one who gave to others openly. All the efforts he put his heart and mind to had a greater benefit to all. The credo in the tech community is to give back more than you take. Aaron did that in spades, which made thinking of a future with whatever he was working on a bit more bright and promising. Aaron’s blog was at the top of my feed reader and was more than worth the time to read. He was a blogger, an open sharer of thoughts and insights, of questions as well as the pursuit of answers to those questions. This is not the human norm; he was a “broken” one in all the understandings that brings of being outside the mainstream norms, but much like all those in the Apple “Think Different” ad campaign he made a difference by thinking different and being different from those norms.
I love David Weinberger’s Aaron Swartz was not a hacker. He was a builder. as well as his Why we mourn. From the rough edges of hearing friends talk about their work with Aaron, and from following along with what he shared, we as a society were in for a special future. Doc Searls’ Aaron Swartz and Freedom lays out wonderfully the core of Aaron’s soul as a native of the Net in the virtues of NEA (Nobody owns it; Everybody can use it; Anybody can improve it). Doc also has a great collection of links, many from those who worked with Aaron or knew him well, on his Losing Aaron Swartz memorial page.
Dang it, we are one down. We are down a great one. But, this net, this future, and this society that fills this little planet needs the future we could have had, but now it is ours to work together to build and make great.
How do we get there? Aaron’s first piece of advice in Aaron Swartz: How to get my job is, “Be curious. Read widely. Try new things. I think a lot of what people call intelligence just boils down to curiosity.”
There was a kerfuffle a couple weeks back around Delicious when the social bookmarking service was marked for end of life by Yahoo, which caused a rather large number of people I know to go rather nuts. Yahoo has made the claim that they are not shutting the service down, which only seems like a stall tactic, but perhaps they may actually sell it (many accounts from former Yahoo and Delicious team members have pointed out the difficulties in that, as it was ported to Yahoo’s own services, with their own peculiarities).
Nevertheless, this brings up an important point: redundancy. One lesson I learned many years ago related to the web (heck, related to anything digital) is that it will fail at some point. Cloud-based services are not immune, and the network connection to those services is often even more problematic. But, one of the tenets of the Personal InfoCloud is that you keep your information across trusted services and devices so you have continual and easy access to that information. Part of ensuring that continual access is ensuring redundancy and backing up. Optimally, the redundancy or back-up is a usable service that permits ease of continued use if one resource is not reachable (those sunny days where there’s not a cloud to be seen). Performing regular back-ups of your blog posts and other places you post information is valuable. Another option is a central aggregation point (these are long dreamt of and yet to be really implemented well; this is a long-brewing interest with many potential resources and conversations).
With regard to Delicious I’ve used redundant services and manually or automatically fed them. I was doing this with Ma.gnol.ia as it was (in part) my redundant social bookmarking service, but I also really liked a lot of its features and functionality (there were great social interaction design elements that were deployed there that were quite brilliant and made the service a real gem). I also used Diigo for a short while, but too many things there drove me crazy and continually broke. A few months back I started using Pinboard, as the private reincarnation of Ma.gnol.ia shut down. I have also used ZooTool, which has more of a visual design community (the community that self-aggregates to a service is an important characteristic to take into account after the viability of the service).
Pinboard has been a real gem, as it uses the commonly implemented Delicious API (version 1) as its core API, which means most tools and services built on top of Delicious can be relatively easily ported over with just a change to the source URL. This was similar for Ma.gnol.ia and other services. But, Pinboard will also continually pull in Delicious postings, so it works very well for redundancy’s sake.
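Because Pinboard mirrors the Delicious v1 API, "porting" a tool really can be just a base-URL swap. A minimal sketch in Python of what that looks like; the hosts and the posts/add endpoint shape are as I recall the v1 API, so treat the specifics as assumptions rather than documentation:

```python
# Sketch: the same client code targets Delicious or Pinboard by swapping
# the API base URL, since Pinboard implements the Delicious v1 API.
from urllib.parse import urlencode

DELICIOUS_API = "https://api.del.icio.us/v1"   # assumed v1 host
PINBOARD_API = "https://api.pinboard.in/v1"    # assumed v1-compatible host

def build_add_request(api_base, url, description, tags=()):
    """Build a posts/add request URL for any Delicious v1-compatible API."""
    query = urlencode({
        "url": url,
        "description": description,
        "tags": " ".join(tags),  # v1 tags are space delimited
    })
    return f"{api_base}/posts/add?{query}"

# Porting a tool from Delicious to Pinboard is just a base-URL change:
delicious_req = build_add_request(DELICIOUS_API, "http://example.com", "Example")
pinboard_req = build_add_request(PINBOARD_API, "http://example.com", "Example")
```

The request-building logic, authentication handling, and response parsing all stay identical; only the constant at the top changes, which is why so many Delicious tools could move over quickly.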
There are some things I quite like about Pinboard (some things I don’t, and I will get to them), such as the easy integration from Instapaper (anything you star in Instapaper gets sucked into your Pinboard). Pinboard has a rather good mobile web interface (something I loved about Ma.gnol.ia too). Pinboard was started by co-founders of Delicious, and so has solid depth of understanding. Pinboard is also a pay service (based on an incremental one-time fee) with a full archive of pages bookmarked (it saves a copy of the pages), which is great for its longevity as it has some sort of business model (I don’t have faith in the “underpants - something - profit” model), and it works brilliantly for keeping out spammers (another pain point for me with Diigo).
My biggest nit with Pinboard is the space-delimited tag terms, which means multi-word tag terms (San Francisco, recent discovery, etc.) are not possible. Non-alphabetic word delimiters (like underscores, hyphens, and dots) are really problematic for clarity, for easy aggregation without scripting to disambiguate and assemble relevant related terms, and for mainstream user understanding. I also miss being able to easily see who is following my shared items, so as to find others to potentially follow, something Delicious offered.
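The scripting needed to disambiguate those single-word workarounds is straightforward, but it should not be the user's job. A rough sketch of the kind of normalization I mean; the rules here are my own illustration, not the behavior of any of these services:

```python
# Sketch: collapse the common single-word tag workarounds (underscores,
# hyphens, dots, camelCase) into one canonical multi-word form so the
# variants can be aggregated together.
import re

def normalize_tag(tag):
    """Return a canonical lowercase multi-word form of a tag."""
    tag = re.sub(r"(?<=[a-z])(?=[A-Z])", " ", tag)  # split camelCase words
    tag = re.sub(r"[_\-.]+", " ", tag)               # delimiters become spaces
    return " ".join(tag.lower().split())             # squeeze extra whitespace

variants = ["san_francisco", "san-francisco", "san.francisco", "SanFrancisco"]
canonical = {normalize_tag(t) for t in variants}
# all four variants collapse to the single term "san francisco"
```

Every service that disallows spaces in tags pushes this disambiguation work onto whoever aggregates the tags later, which is exactly the clarity problem described above.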
For now I am still feeding Delicious as my primary source, which is naturally pulled into Pinboard with no extra effort (as it should be with many things), but I'm already looking for a redundancy for Pinboard given the questionable state of Delicious.
The Value of Delicious
Another thing that surfaced with the Delicious end-of-life (non-official) announcement from Yahoo was the incredible value it has across the web. Not only do people use it and deeply rely on it for storing and contextualizing links/bookmarks with tags and annotations, refinding their own aggregation, and sharing this out easily for others, but they use Delicious in a wide variety of other ways. People use Delicious to surface relevant information related to their affinities or work needs, as it is easy to get a feed for not only a person or a tag, but also a person-and-tag pairing. The immediate responses that sounded serious alarm at the news of Delicious’ demise came from those who had built valuable services on top of it. There were many stories about well known publications and services not only programmatically aggregating potentially relevant and tangential information for research, ad hoc and in relatively real time, but also sharing out links for others. Some use Delicious to easily build “related information” resources for their web publications and offerings. One example comes from Marshall Kirkpatrick of ReadWriteWeb, who wonderfully describes their reliance on Delicious.
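Those feed pairings compose very simply, which is a big part of why so much was built on top of them. As a sketch (the feed host and path layout here are from memory of the Delicious feeds and should be treated as assumptions):

```python
# Sketch: Delicious exposed an RSS feed for a person, a tag, or a
# person-and-tag pairing, each at a predictable URL.
FEED_BASE = "http://feeds.delicious.com/v2/rss"  # assumed feed host

def feed_url(user=None, tags=()):
    """Return the RSS feed URL for a user, a tag, or a user+tag pairing."""
    if user and tags:
        return f"{FEED_BASE}/{user}/{'+'.join(tags)}"   # person-and-tag pairing
    if user:
        return f"{FEED_BASE}/{user}"                     # everything a person saves
    if tags:
        return f"{FEED_BASE}/tag/{'+'.join(tags)}"       # everything with a tag
    return FEED_BASE                                     # the site-wide stream

# e.g. watch one person's bookmarks on one subject of interest:
url = feed_url(user="vanderwal", tags=["folksonomy"])
```

A publication wanting "related information" for a story only had to subscribe to the right pairing feed; no scraping, no API keys, just a URL.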
It was clear very quickly that Yahoo is sitting on a real backbone of many things on the web, not the toy product some in Yahoo management seemed to think it was. The value of Delicious to Yahoo seemingly diminished greatly after Yahoo was no longer in the search marketplace. Silently confirmed hunches that Delicious was used as fodder to greatly influence search algorithms (surfacing highly probable synonyms and related web content stored by explicit interest, a much higher-value signal than inferred interest) made Delicious a quite valued property while Yahoo ran its own search property.
For ease of finding me (should you wish) on Pinboard, I am at http://pinboard.in/u:vanderwal
Good relevant posts from others:
The idea of a tag "As If Had Read" started as a riff off of riffs with David Weinberger at Reboot 2008, regarding the "to read" tag that is prevalent in many social bookmarking sites. But "as if had read" is not as tongue-in-cheek as it sounds at the moment; it is a moment of aha!
I have been using DevonThink on my Mac for 5 or more years. It is a document, note, web page, and general content catch-all that is easily searched. But it also surfaces other items that it sees as relevant to what you are viewing. The connections it makes are often quite impressive.
My Info Churning Patterns
I have promised for quite a few years that I would write up how I work through my inbound content. This process changes a lot, but it is back to a settled state again (mostly). Going back 10 years or more, I would go through my links page and check all of the links on it (75 to 100 links at that point) to see if there was something new or of interest.
But, that changed to using a feed reader to pull in 400 or more RSS feeds that I would triage. (I used, and am back to using, NetNewsWire on the Mac, as it has the features I love, it is fast, and I can skim 4x to 5x the content I can in Google Reader; interface and design matter.) I would skim the new (bold) titles and skim the content in the reader; if an item was of potential interest I opened the link into a browser tab in the background and just churned through the skimming of the 1,000 to 1,400 new items each night. Then I would open the browser to read the tabs. At this stage I actually read the content, and if part way through it I did not think it had current or future value I closed the tab. In about 90 minutes I could triage through 1,200 to 1,400 new RSS feed items, get 30 to 70 potential items of value open in tabs in a browser, and get this down to a usual 5 to 12 items of current or future value. Yes, in 90 minutes (keeping focus to sort out the chaff is essential). From this point I would blog, or at least put these items into Delicious and/or Ma.gnol.ia or Yahoo MyWeb 2.0 (this service was insanely amazing and years ahead of its time; I will write up its value).
The volume and tools have changed over time. Today the same number of feeds (approximately 400) turns out 500 to 800 new items each day. I now post less to Delicious and opt for DevonThink, for 25 to 40 items each day. I had stopped using DevonThink (DT) and opted for Yojimbo and then Together.app, as they had tagging and I could add my own context (I found my own context had more value than DevonThink’s contextual relevance engine). But, when DevonThink added tagging it became an optimal service, so I added my archives from Together and now use DT a lot.
Relevance of As if Had Read
But, one of the things I have been finding is that I can not only search within the content of items in DT, but can quickly aggregate related items by tag (work projects, long writing projects, etc.). Its incredible value, though, is how it has changed my information triage and process. I am now taking those 30 to 40 tabs and doing a more in-depth read, but only rarely reading the full content, unless its current value is high or the content is compelling. I am acting on the content more quickly and putting it into DT. When I need to recall information I use the search to find content and then pull related content closer. I not only have the item I was seeking, but other related content that adds depth and breadth to a subject. My own personal recall of the content is enough to start a search that will find what I was seeking with relative ease. Where I did a deeper skim read in the past, I now do a deeper read of the prime focus. My augmented recall, with the brilliance of DevonThink, works just as well as if I had read the content deeply the first time.
Many of the social web services (Facebook, Pownce, MySpace, Twitter, etc.) have messaging services so you can communicate with your "friends". Most of the services will only ping you on communication channels outside their website (e-mail, SMS/text messaging, feeds (RSS), etc.) and require the person to go back to the website to see the message, with the exception of Twitter, which does this properly.
Here is where things are horribly broken. The closed services (except Twitter) will let you know you have a message on their service via your choice of communication channel (e-mail, SMS, or RSS), though not all offer all options. When a message arrives for you in the service, the service pings you in the communication channel to let you know you have a message. But rather than give you the message, it points you back to the message on the website (Facebook does provide SMS-chunked messages, but not e-mail). This means they are sending a message to a platform that works really well for messaging just to let you know you have a message, but not to deliver that message. This adds extra steps for the people using the service, rather than making a simple streamlined service that truly connects people.
Part of this broken interaction is driven by Americans building these services with desktop-centric and web views, forgetting that mobile is not only a viable platform for messaging, but the most widely used platform around the globe. I do not think the iPhone, which has been purchased by the owners and developers of these services, will help, as the iPhone is an elite tool that is not like the messaging experience of the hundreds of millions of mobile users around the globe. Developers not building or considering services for people to use on the devices or applications of their choice is rather broken development these days. Google gets it with Google Gears and their mobile efforts, as does Yahoo with its Yahoo Mobile services and other cross-platform efforts.
Broken Interaction Means More Money?
I understand the reasoning behind the services adding steps and making the experience painful: it is seen as money in their pockets through pushing ads. The web is a relatively easy means of tracking and delivering ads, which translates into money. But inflicting unneeded pain on customers cannot be justified by money. Pain will only push customers away and leave the services with fewer people to look at the ads. I am not advocating giving up advertising, but moving ads into the other channels, or building solutions that deliver the messages to people who want the messages and not just a notification that they have a message.
These services were somewhat annoying, but they have enough value to keep somebody going back. When Pownce arrived on the scene a month or so ago, it included the broken messaging, but did not include mobile or RSS feeds. Pownce only provides e-mail notifications, and they only point you back to the site. That is about as broken as it gets for a messaging and status service. Pownce has a beautiful interface, with some lightweight sharing options and the ability to build groups, and it has a lightweight desktop application built on Adobe AIR. The AIR version of Pownce is not robust enough with messaging to be fully useful. Pownce is still relatively early in its development, but they have a lot of fixing to do on things that are made much harder than they should be for consuming information. They include microformats on their pages where they make sense, but they are missing the step of ease of use for regular people: dropping that content into their related applications (a small button on the item with the microformat that converts the content is drastically needed for ease of use). Pownce has some of the checkboxes checked and some good ideas, but the execution is far from there at the moment. They really need to focus on ease of use. If this is done, maybe people will come back and use it.
So who does this well? Twitter has been doing this really well, and Jaiku does this really well on Nokia Series60 phones (after the first version of Series60). Real cross-platform and cross-channel communication is the wave of right now for those thinking of developing tools with great adoption. The great adoption is viable as this starts solving technology pain points that real people are experiencing, and that more will be experiencing in the near future. (Providing a solution to refindability is the technology pain point that del.icio.us solved.) The telecoms really need to be paying attention to this, as do the players in all messaging services. From work conversations and from attendees at the Personal InfoCloud presentation, it is clear people are beginning to get that the person wants and needs to be in control of their information across devices and services.
Twitter is a great bridge between web and mobile messaging. It also has some killer features that add to this ease of use and adoption like favorites, friends only, direct messaging, and feeds. Twitter gets messaging more than any other service at the moment. There are things Twitter needs, such as groups (selective messaging) and an easier means of finding friends, or as they are now appropriately calling it, people to follow.
Can we not all catch up to today's messaging needs?
Emily Chang's post about her My Data Stream brought back memories from a ton of conversations last year. I captured a few of these ideas in a relatively short Life Data Stream post over at Personal InfoCloud, which has comments turned on.
You may want to take a look at TechMeme for related posts.
Stikkit is My In-line Web Triage
I have been using Stikkit, from the bookmarklet, as my in-line web information triage. If I find an event or something I want to come back to later (other than to read and bookmark), I pop that information into Stikkit. Most often it is to remind me of deadlines, events, company information, etc. I open the Stikkit bookmarklet and add the information. The date information I add is dumped into my Stikkit calendar, names and addresses are put into the Stikkit address book, and I can tag them with context for easier retrieval.
Now, with the addition of the API, it is easy to retrieve from Stikkit a vCard, iCal, or other standard data format I can drop into the tools where I normally aggregate similar information. I do not need to refer back to Stikkit to copy and paste (or worse, mis-type) into my work apps.
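Once an API hands back standard formats like vCard, moving the fields into another tool is mechanical. A minimal sketch of reading a simple vCard's properties in Python (Stikkit's actual endpoints are not shown here, as I am not documenting its API; the parser below handles only plain, unfolded vCard lines):

```python
# Sketch: pull the key/value properties out of a simple vCard string so
# they can be dropped into an address book or other tool.
def parse_vcard(text):
    """Return a dict of property name -> value for a simple vCard."""
    card = {}
    for line in text.strip().splitlines():
        if ":" not in line:
            continue
        key, value = line.split(":", 1)
        key = key.split(";", 1)[0].upper()   # drop parameters like TYPE=WORK
        if key not in ("BEGIN", "END"):       # skip the envelope lines
            card[key] = value
    return card

sample = """BEGIN:VCARD
VERSION:3.0
FN:Ada Lovelace
TEL;TYPE=WORK:+1-555-0100
END:VCARD"""
contact = parse_vcard(sample)
# contact["FN"] is "Ada Lovelace"; contact["TEL"] is "+1-555-0100"
```

This is the whole point of standard formats: the consuming tool does not need to know anything about Stikkit, only about vCard.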
I can also publish information from my preferred central data stores to Stikkit so I have web access to events, to dos, names and addresses, etc. From Stikkit I can then share the information with only those I want to share that information with.
Stikkit is growing to be a nice piece for microcontent tracking in my Personal InfoCloud.
From an e-mail chat last week I found out that .net magazine (from the UK) is now on the shelves in the US as "Web Builder". Armed with this knowledge, I found the magazine on my local bookstore shelves with ease. Oddly, when I open the cover it is all ".net".
Rebranding and Crossbranding
In the chat last week I was told the ".net" name had a conflict with a Microsoft product; the magazine is not about the Microsoft product in the slightest, but had a good following before the MS product caught on. Not so surprisingly, ".net" magazine does not have the same confusion in the UK or Europe.
So, the magazine had a choice: not get noticed, or rebrand the US version as "Web Builder" and put up with the crossbranding. This is not optimal, as it adds another layer of confusion for those of us who travel and are used to the normal name of the product and look only for that name. Optimally, one magazine name would be used for the English-language web design and development magazine. If this ever happens, it will mean breaking a well loved magazine name for its many loving fans in the UK and Europe.
What is Special About ".net" or "Web Builder"?
Why do I care about this magazine? It is one of the few print magazines about web design and web development. Not only is it one of the few, but it flat out rocks! It takes current Web Standards best practices and makes them easy to grasp. It is explaining all of the solid web development practices and how to not only do them right, but understand if you should be doing them.
I know, you are saying, "but all of this stuff is already on the web!" Yes, this stuff is on the web, but not every web developer lives their life on the web, and most importantly, many of the bosses and managers who will approve this stuff do not read stuff on the web; they still believe in print. Saying the managers need to grow up and change is short-sighted. One of the best progressive thinkers on technology, Doc Searls, is on the web, but he also has a widely read regular column in Linux Journal. For me, the collection of content in ".net" is some of the best stuff out there. I read it on planes and while I am waiting for a meeting or appointment.
I know the other thing many of you are saying: "but it is only content from UK writers!" Yes, so? The world is really flat, and where somebody lives makes little difference, as we are all only a mouse click away from each other. We all have the same design and development problems, as we are living with the same browsers and similar people using what we design and build. But it is also amazing that a country a fraction of the size of the US has many more killer web designers and developers than the US. There is some killer stuff going on in the UK on the web design and development front. There is great thought, consideration, and research that goes into design and development in the UK and Europe; in the US it is "let's try it and see if it works or breaks" (this is good too and has its place). It is out of great thought and consideration that teaching and guiding can flow. It also leads to killer products. Looking at the Yahoo Europe implementations of microformats, rather far and wide in their products, is telling, when it has happened far slower in the main Yahoo US products.
Now I am just hoping that ".net" will expand their writing to include a broader English speaking base. There is some killer talent in the US, but as my recent trip to Australia showed there is also killer talent there too. Strong writing skills in English and great talent would make for a great global magazine. It could also make it easier to find on my local bookstore shelves (hopefully for a bit cheaper too).
Two Conferences Draw Focus
I am now getting back to responding to e-mail sent in the last two or three weeks and digging through my to-do list. As time wears on, I am still rather impressed with both the XTech and Microlearning conferences. Both have a focus on information and data that mirrors my approaches from years ago and that is the foundation for how I view all information and services. Both rely on well structured data. This is why I pay attention and keep involved in the information architecture community. Well structured data is the foundation of what falls into the description of web 2.0. All of our tools for open data reuse demand that the underlying data is structured well.
Simplicity of the Complex
One theme that continually bubbled up at Microlearning was simplicity. Peter A. Bruck in his opening remarks at Microlearning focussed on simplicity being the means to take the complex and make it understandable. There are many things in the world that are complex and seemingly difficult to understand, but many of the complex systems are made up of simple steps and simple to understand concepts that are strung together to build complex systems and complex ideas. Every time I think of breaking down the complex into the simple components I think of Instructables, which allows people to build step-by-step instructions for anything, but they make each of the steps as reusable objects for other instructions. The Instructables approach is utterly brilliant and dead in-line with the microlearning approach to breaking down learning components into simple lessons that can be used and reused across devices, based on the person wanting or needing the instruction and providing it in the delivery media that matches their context (mobile, desktop, laptop, tv, etc.).
Simple Clear Structures
This structuring of information ties back into the frameworks for syndication of content and well structured data and information. People have various uses and reuses for information, data, and media in their lives. This is the focus of the Personal InfoCloud. This is the foundation for information architecture: addressable information that can be easily found. But, in our world of information floods and information pollution, with too much information to sort through, findability of information is as important as refindability (which is rarely addressed). And along with refindability comes the means to aggregate the information in interfaces that make sense of the information, data, and media so as to provide clarity and simplicity of understanding.
Europe Thing Again
Another perspective on the two conferences was that they were both in Europe. This is not a trivial variable. At XTech there were a few other Americans, but at Microlearning I was the only one from the United States, and there were a couple of Canadians. The European approach to understanding and building is slightly different from the approach in the USA. In the USA there is a lot of building, then learning and understanding, whereas in Europe there seems to be much more effort put into understanding and then building. The results are somewhat different: European products tend to come out of the gate with a professional polish where things work. This was really apparent with System One, which is an incredible product. System One has all the web 2.0 buzzwords under the hood, but the focus is a simple-to-use tool that pulls together the best of the new components, and only where it makes sense, to create a simple tool that addresses complex problems.
Culture of Understanding Complex to Make Simple
It seems the European approach is to understand and embrace the complex and make it simple through deep understanding of how things are built. It is very similar to Instructables as a culture. The approach in the USA seems to include the tools, but lacks the understanding of the underlying components, and in turn leaves out elements that really embrace simplicity. Google is a perfect example of this approach. They talk simplicity, but nearly every tool is missing elements that would make it fully usable (the calendar not having sync, not being able to turn on only one or two Google tools rather than everything). This simplicity is well understood by the designers, who have wonderful solutions to the problems, but the corporate culture of churning things out gets in the way.
Breaking It Down for Use and Reuse
Information in simple forms that can be aggregated and viewed as people need in their lives is essential to us moving forward and taking the pain out of technology that most regular people experience on a daily basis. It is our jobs to understand the underlying complexity, create simple usable and reusable structures for that data and information, and allow simple solutions that are robust to be built around that simplicity.
Now that I have had a little time to sit back and think about XTech, I am quite impressed with the conference. The caliber of the presenters and the quality of their presentations were some of the best of any event I have been to in a while. The presentations got beneath the surface level of the subjects and provided insight that I had not run across elsewhere.
The conference focus on the browser, open data (XML), and high-level presentations was a great mix. There was much cross-over in the presentations, and it took a while to sink in that this was not a conference of stuff I already knew (or material presented at a more introductory level), but of things I wanted to dig deeper into. I began to realize late into the conference (or after, in many cases) that the people presenting were people whose writing and contributions I had followed regularly when I was doing deep development of web applications (not managing web development). I changed my focus last Fall to get back to developing innovative applications, working on projects that are built around open data, and filling some of the many gaps in the Personal InfoCloud (I also left to write, but that did get sidetracked).
As I mentioned before, XTech had the right amount of geek mindset in the presentations. The one that really brought this to the forefront of my mind was on XForms, an Alternative to Ajax by Erik Bruchez. It focussed on using XForms as a means to interact with structured data with Ajax.
Once it dawned on me that this conference was rather killer and I should be paying attention to the content, and not just to those in the floating island of friends, the event was nearly two-thirds of the way through. This huge mistake on my part was due to the busy nature of things that led up to XTech, as well as not getting there a day or two earlier to adjust to the time and attend the pre-conference sessions and tutorials on Ajax.
I was thrilled to see the Platial presentation and meet the makers of the service. When I went to attend Simon Willison’s presentation rather than the GeoRSS session, I realized there was much good content at XTech, and it is now on my must-attend list.
As the conference progressed, I was thinking of all of the people who would have really benefitted from and enjoyed XTech as well. A conference about open data, and about systems to build applications that meet real people’s needs, is essential for most developers working on the live web these days.
If XTech sounded good this year in Amsterdam, you may want to note that it will be in Paris next year.
In the past few days of being wrapped up in moving this site to a new host and client work, I have come across a couple items that have similar DNA, which also relate to my most recent post on the Come to Me Web over at the Personal InfoCloud.
Sites to Flows
The first item to bring to light is a wonderful presentation, From Sites to Flows: Designing for the Porous Web (3MB PDF), by Even Westvang. The presentation walks through the various activities we do as personal content creators on the web. Part of this fantastic presentation is its focus on microcontent (the granular content objects) and its relevance to context. Personal publishing is more than publishing on the web; it is publishing to content streams, or "flows" as Even states it. These flows of microcontent are consumed less in web browsers as their first use and more in syndicated feeds (RDF, RSS/Atom, Trackback, etc.). Even then moves on to Underskog, a local calendaring portal for Oslo, Norway.
The Publish/Subscribe Decade
Salim Ismail has a post about The Evolution of the Internet, in which he states we are in the Publish/Subscribe Decade. In his explanation Salim writes:
The web has been phenomenally successful and the amount of information available on it is overwhelming. However, (as Bill rightly points out), that information is largely passive - you must look it up with a browser. Clearly the next step in that evolution is for the information to become active and tell you when something happens.
It is this being overwhelmed with information that has been of interest to me for a while. We (the web development community) have built mechanisms for filtering this information. There are many approaches to this filtering, but one of them is the subscription and alert method.
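The subscription and alert method is simple enough to sketch. Here is a minimal, hypothetical filter (the item titles and URLs are invented for illustration) that surfaces only the pieces of a flow matching a reader's stated interests:

```python
def alert(items, keywords):
    """Subscription-and-alert filtering: given a flow of (title, link)
    items, keep only those whose titles match the reader's interests."""
    kws = [k.lower() for k in keywords]
    return [(title, link) for title, link in items
            if any(k in title.lower() for k in kws)]

# Invented sample flow for illustration.
items = [("New RSS parser released", "http://example.org/a"),
         ("Weekend photos", "http://example.org/b")]

print(alert(items, ["rss"]))  # only the RSS item survives the filter
```

Real aggregators layer this kind of keyword matching on top of feed polling, but the principle is the same: the information comes to the person, pre-filtered.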
The Come to Me Web
It is almost as if I had written Come to Me Web as a response to or extension of what Even and Salim are discussing (the post had been in the works for many weeks and is a longer explanation of a focus I started putting into my presentations in June). This come-to-me web is something very few are doing, and/or doing well, in our design and development practices beyond personal content sites (and even there it really needs a lot of help in many cases). Focussing on the microcontent chunks (or granular content objects, in my personal phraseology), we can not only provide the means for others to best consume the information we are providing, but also aggregate it and provide people with a better understanding of the world around them. More importantly, we provide the means to best use and reuse the information in people's lives.
Important in this flow of information is to keep the source and identity of the source. Having the ability to get back to the origination point of the content is essential to get more information, original context, and updates. Understanding the identity of the content provider will also help us understand perspective and shadings in the microcontent they have provided.
I also brought back the link to just the Off the Top RSS feed, which has nothing but the last 10 entries in the archaic RSS 0.91 format. I am still offering the wonderful Feedburner for Off the Top option, which bundles the Off the Top entries, my del.icio.us entries, and my Flickr photo feed in one. Quite a few people are reading this in RSS on mobile devices at the moment, and I thought I would make it easier for others going that route to get just the content of Off the Top.
I have posted my presentation from yesterday's session at WebVisions, in Portland, Oregon. The files, Designing for the Personal InfoCloud are in PDF format and weigh in at 1.3MB.
I really had a blast at the conference and wish I could have been there the whole day. I will have to say from the perspective of a speaker it is a fantastically run conference. Brad Smith of Hot Pepper Studios did a knock out job pulling this conference together. It should be on the must attend list for web developers. I was impressed with the speakers, the turn out, and how well everything was run. Bravo!
WebVisions is held in one of my favorite cities, Portland, Oregon, which has some of the best architecture and public planning of any North American city. I have more than 300 photos I have taken in 48 hours and will be posting many at Flickr in the next couple of days.
Mike just posted a killer international and language-free RSS logo button on his site. I really like it. It works for those of us who understand the RSS text version, but it could also work for those who are less technically forward or who read non-English/Western languages. The RSS and XML text on the buttons always needs explanation for those not familiar with the terms. The end of many tutorials is often, "just click it, you do not really need to know what it means, just click". Something tells me Mike is on to something profound yet wonderfully simple.
I think Tom's pointer to the BBC is a fairly good transition to where we are heading. It will take the desktop OS or browser to make it easier. Neither of these are very innovative or quickly adaptive on the Windows side of the world.
Firefox was the first browser (at least that I know of) to handle RSS outside the browser window, but it was still handled in a side window of the browser. Safari has taken this to the next step, which is to use a MIME type to connect the RSS feed to the desktop application of preference. But we are still not where we should be, which is to click the RSS button on a web page and have that link dumped into one's preferred reader, whether that is an application on the desktop or a web/internet-based solution such as Bloglines.
All of this depends on who we test as users. Many times as developers we test in the communities that surround us, which is a skewed sample of the population. If one is in the Bay Area it may be best to go out to Stockton, Modesto, Fresno, or up to the foothills to get a sample of the population that is representative of those less technically adept, who will have very different usage patterns from those we normally test.
When we test with these lesser adept populations it is the one-click solutions that make the most sense. Reading a pop-up takes them beyond their comfort zone or capability. Many have really borked things on their devices/machines by trying to follow directions (be they well or poorly written). Most only trust easy solutions. Many do not update their OS as it is beyond their trust or understanding.
When trends start happening out in the suburbs, exurbs, and beyond the centers of technical adeptness (often major cities) that is when they have tipped. Most often they tip because the solutions are easy and integrated to their technical environment. Take the Apple iPod, it tipped because it is so easy to set up and use. Granted the lack of reading is, at least, an American problem (Japanese are known to sit down with their manuals and read them cover to cover before using their device).
We will get to the point of ease of use for RSS and other feeds in America, but it will take more than just a text pop-up to get us there.
A state of the newspaper industry article in today's Washington Post tries to define what people want from newspapers and what people are doing to get information.
Me? I find that newspapers provide decent to great content. Newspapers are losing readers of their print versions, but most people I know are now reading more than one paper, online. The solutions I see from my vantage point are as follows.
The articles rarely have ads that relate to the stories, foolishly missing ad revenue. The ads that do appear are distracting and make for an extremely poor experience for the reader. News sites should ban the poorly targeted inducements that rely on distracting the person from reading the article, which is the reason the person is on that web page. The person has an interest in the topic, and there are monetary opportunities to be had if the news outlets and the advertisers were smart.
How? If I am reading an article on the San Francisco Giants, I would follow, and might even pay a little something for, an ad targeted to this interest of mine: buying Giants tickets, paraphernalia, a downloadable video of the week's highlights, etc. If I am reading about an airline strike, a link to train tickets would be a smart option. A news article about problems in the Middle East could have links to books by the journalist on the subject, other background books or papers, or charitable organizations that provide support in the region. The reader has shown an interest; why not offer something that will also be of interest?
We know that advertisers want placement in what they consider prime territory, the highly trafficked areas of the site. Often this is where the non-targeted ads appear. This is an opportunity to have non-targeted ads pay a premium, say five to 20 times that of targeted ads. The non-targeted ads have to follow the same non-disruptive guidelines that targeted ads follow. This is about keeping the readers around; without readers, selling ads does not make any sense.
One area where the news sites are driving me crazy is access to the archives. The sites that require payment to view archived articles are shooting themselves in the foot, both with the payment method and with the amount a person must cough up to see an article that may or may not be what they are seeking. The archives have the same opportunity to sell related ads, which, in my non-professional view, would seem to have even more value, as the person consuming the information is more than a casual reader. Any payment by the person consuming the information should never be more than the price of the whole print version. The articles cost next to nothing to store, and the lower the price, the more people will come across the associated advertising.
Blogging and personal sites often point to news articles. Many of us choose whom to point to based on our readers' access to that information at any point in the future. We may choose a less well-written article, knowing it will be around without our readers having to pay extortionist rates to see it. Yes, we are that smart, and we are not as dumb as your advertisers are telling you. We, the personal site writers, are driving potential ad revenue to you for free, if you open your articles for consumption.
Loyalty to one paper is dead, particularly when there are many options for getting our news. We can choose any news source anywhere in the world. Why would we choose yours? Easy access, good writing, point of view, and segment coverage (special interests: local, niche industries, etc.) are what drive our decisions.
I often choose to make my news selections include sources from outside my region and even outside my country. Why? I like the educated writing style that British sources offer. I like other viewpoints that are not too close to the source to be tainted. I like well researched articles. I like non-pandering viewpoints. This is why I shell out the bucks for the Economist, as it is far better writing than any other news weekly in the U.S. and it pays attention to what is happening around the world, which eventually will have an impact on me personally. I don't have patience for mediocrity in journalism, and the standards for many news sources have really slipped over the past few years.
News sources that offer diversity of writing style and opinion will attract attention. The dumbing down of writing in the news has actually driven away many of those who are willing to pay to read the print versions. Under-educated readers are not going to pay to read, even if it is dumbed down. Yes, USA Today succeeded in that, but did you really want those readers at the loss of your loyal revenue streams?
Loyalty also requires making the content easily available across devices. Time and information consumption have changed. We may start reading an article in the print edition (even over somebody's shoulder) and want to follow up with it. We should be able to easily find that article online from our desk or from our mobile device. Integration of access across devices is a need, not a nicety, and it is not that difficult to provide if some preparation is done in the systems. Many of us pull RSS feeds from our favorite news sources and flag things for later consumption, but the news sites have not caught on to how best to enable that. We may pull feeds at one location but have the time and focus to read them at another, where we may not have the feeds. Help those of us who are loyal consume your information in the pan-medium, pan-device world we live in.
The "My" portal hype died for all but a few central "MyX" portals, like my.yahoo. Two to three years ago "My" was hot and everybody and their brother spent a ton of money building a personal portal to their site. Many newspapers had their own news portals, such as the my.washingtonpost.com and others. Building this personalization was expensive and there were very few takers. Companies fell down this same rabbit hole offering a personalized view to their sites and so some degree this made sense and to a for a few companies this works well for their paying customers. Many large organizations have moved in this direction with their corporate intranets, which does work rather well.
Where Do Personalization Portals Work Well?
Personalization works at points where information aggregation makes sense. The my.yahoos of the world work because they are the one place for a person to do their one-stop information aggregation. People who use personalized portals often have one for work and one for personal life. These portals are used because they provide one place to look for the information a person needs.
On the corporate intranet, having one centralized portal works well. These interfaces to a centralized resource, holding the information each person wants according to their needs and desires, can be very helpful. Having more than one portal often leads to quick failure, as there is no centralized point that is easy to work from to get to what is desired. People use these tools as part of their Personal InfoCloud, which has information aggregated as they need it, categorized and labeled in the manner easiest for them to understand (some organizations use portals as a means of enculturating users into the common vocabulary desired for use in the organization; this top-down approach can work over time, but also leads to users not finding what they need). People in organizations often want information about the organization's changes, employee information, calendars, discussion areas, etc. to be easily found.
Think of personalized portals as very large umbrellas. If you can think of logical umbrellas above your organization, then you are probably in the wrong place to build a personalized portal, and your time and effort will be far better spent providing information in a format that can be easily used in a portal or information aggregator. Sites like the Washington Post's personalized portal did not last because of the costs of keeping the software running and the relatively small group of users who wanted or used the service. Was the Post wrong to move in this direction? No, not at the time, but now that there is an abundance of lessons learned in this area it would be extremely foolish to do so.
You ask about Amazon? Amazon does an incredible job at providing personalization, but like your local stores that is part of their customer service. In San Francisco I used to frequent a video store near my house on Arguello. I loved that neighborhood video store because the owner knew me and my preferences and off the top of his head he remembered what I had rented and what would be a great suggestion for me. The store was still set up for me to use just like it was for those that were not regulars, but he provided a wonderful service for me, which kept me from going to the large chains that recorded everything about me, but offered no service that helped me enjoy their offerings. Amazon does a similar thing and it does it behind the scenes as part of what it does.
How does Amazon differ from a personalized portal? In the aggregation of information. A personalized portal aggregates what you want; that is its main purpose. Amazon allows its information to be aggregated using its API, but Amazon's goal is to help you buy from Amazon. A personalized portal's goal is to provide one-stop information access. Yes, my.yahoo does have advertising, but its goal is to aggregate information in an interface that helps users find the information they want easily.
Should government agencies provide personalized portals? It makes the most sense to provide this at the government-wide level. Similar to First.gov, a portal that allows tracking of government information would be very helpful. Why not at the agency level? Cost and effort! If you believe in government running efficiently, it makes sense to centralize a service such as a personalized portal. The U.S. Federal Government has very strong restrictions on privacy, which greatly limit the login for a personalized service. The U.S. Government's e-gov initiatives could be other places to provide these services, as there is information aggregation at these points also. The downside is having many login names and passwords to remember to get to the various aggregation points, which was one of the large downfalls of the MyX players of the past few years.
What Should We Provide?
The best solution for many is to provide information that can be aggregated. The centralized personalized portals have been moving toward allowing the inclusion of any syndicated information feed. Yahoo has been moving in this direction for some time, and the new beta version of my.yahoo released in the past week allows users to select the feeds they would like in their portal, even from non-Yahoo resources. In the new my.yahoo, any information that has a feed can be pulled into the information aggregator. Many of us have been doing this for some time with RSS feeds, and it has greatly changed the way we consume information, making information consumption more efficient.
There are at least three layers in this syndication model. The first is the information syndication layer, where information (or its abstraction and related metadata) is put into a feed. These feeds can then be aggregated with other feeds, similar to what del.icio.us provides (del.icio.us also provides a social software and sharing tool that can be helpful for sharing out personally tagged information and aggregations based on this bottom-up categorization, or folksonomy). The next layer is the information aggregator or personalized portal, which is where people consume the information and choose whether they want to follow the links in the syndication to get more information.
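As a rough sketch of the first two layers, here is a minimal aggregator (the feed contents and URLs are invented for illustration) that merges items from several syndication feeds into one flow while keeping each item tied to its source:

```python
import xml.etree.ElementTree as ET

# Two invented feeds, inlined so the sketch runs without a network fetch.
FEED_A = """<rss version="2.0"><channel><title>Weblog</title>
<item><title>Come to Me Web</title><link>http://example.org/come-to-me</link></item>
</channel></rss>"""

FEED_B = """<rss version="2.0"><channel><title>Links</title>
<item><title>Folksonomy notes</title><link>http://example.org/folksonomy</link></item>
</channel></rss>"""

def parse_items(feed_xml, source):
    """Layer one: pull (source, title, link) tuples out of one RSS document."""
    channel = ET.fromstring(feed_xml).find("channel")
    return [(source, item.findtext("title"), item.findtext("link"))
            for item in channel.findall("item")]

def aggregate(feeds):
    """Layer two: merge several feeds into one flow, keeping source identity."""
    merged = []
    for source, xml_text in feeds.items():
        merged.extend(parse_items(xml_text, source))
    return merged

for source, title, link in aggregate({"weblog": FEED_A, "links": FEED_B}):
    print(f"[{source}] {title} -> {link}")
```

Keeping the source label attached to each item matters later, when the reader wants to get back to the origination point of the content.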
There is little need to provide another personalized portal, but there is great need for information syndication. Just as people have learned with internet search, the information has to be structured properly. The model of information consumption relies on the information being found. Today information is often found through search and information aggregators and these trends seem to be the foundation of information use of tomorrow.
We are now providing a consolidated feed of the main blog RSS feed, our vanderwal del.icio.us feed, and the vanderwal Flickr feed in one vanderwal.net Feedburner feed. You ask about the feed of our Quick Links? Currently it is not included in the Feedburner feed, but we have also optimized that active feed with Feedburner, at the vanderwal.net Quick Link FeedBurner feed.
If you like the feeds the way you have been getting them, you can still do so. Lately the Quick Link and del.icio.us feeds are being updated more frequently, as they take much less time to post to. These are just snippets of things I am interested in coming back to, or have found of interest but have not found the time to give a full blog entry.
We are considering replacing the Quick Links with our del.icio.us feed at some point in the not too distant future. Tell us what you think.
Paul wants to "set up one of those link-sidebar thingies again" for his quick link list. Actually, I am finding those side link lists, like mine, cause problems for folks tracking referrer links back and for search engines. Context for the links is helpful, but so is being able to find the date and page the links came from. The way Paul is doing his quick links now works well. I was able to point directly to these links, and the links he makes have context, even if only as a list of links.
Quite similar to the Fixing Permalink to Mean Something post the other day, the links in the sidebar are temporary. I find links from Technorati back to my site from some poor soul looking for the comment and link vanderwal.net had placed. These links do not have a permalink, as they are ever rotating. I have received a few e-mails asking where the link was from and whether I was spamming in some way.
Why do I have the quick links? I don't have the time to do a full or even short write-up. I clear my tabbed browser windows and put the items I have not read in full in the Quick Links. Some things I want access to from my mobile device or at work, to read in full or make use of the information. Other things I want to keep track of and include in a write-up.
The other advantage of moving the quick links into the main content area is that they would be easier to include in one aggregated feed. I know I can join my current feeds, but I like the sites that provide the feeds in the same context as they appear on the site, as it eases the ability to find the information. This change will take more than a five or ten minute fix for my site, but it is on my to-do list.
Coming back from six-plus days of being untethered from the net, I found 1117 unread items in my RSS feeds. This is worse than my personal e-mail stack, which was just over 550 (I get to my work e-mail stack tomorrow, which averages about 80 e-mails per day). The RSS feeds really threw me, as I was not expecting them to have snowballed like that.
There were a few things I knew might pop their heads up while I was away, so I followed them on my Treo 600 via Google News and a del.icio.us aggregator. I was able to find most of what I was looking for, do a quick read, and then e-mail an annotated link to one of my personal e-mail accounts. I did find some things on del.icio.us that I just copied into my del.icio.us bookmarks so I could come back to them later.
I got far less done on the writing front, as my son was along for the vacation, which made it a real family vacation and not the usual working vacation with the laptop on my lap on the front porch when I am not playing in the waves. No, I would not say I am rested, but I do have more wonderful memories of a great summer getaway. Our time schedules shifted to a 10-month-old's eating and sleeping schedule. When we drifted toward our normal shore vacation schedule we had a cranky kid, so it only took two days to convert to a vacation fully focused on the kid. We met many wonderful new people, stayed in a different B&B, and found a new restaurant to add to our favorites.
I am now ready for the last two days of the week and to start responding to e-mail tomorrow. I am also ready to tackle my writing assignments that are well over due. My laptop is also fully updated with OS and software updates that make it really sing, too bad Windows updates never make the machine perceivably faster.
Time has been very thin of late. In the past six months or so I started noticing an increasing number of links from del.icio.us and started pulling the feeds of some folks whose reading lists I like to follow into my site feed aggregator. I had about four or five del.icio.us feeds in my aggregator (a meta-aggregation of others' meta-aggregations - MetaAg MetaAg). This past week I was taking medicine that tweaked my sleep patterns, so I had some free awake time after midnight and finally set up my own vanderwal del.icio.us feed.
I like having the ability to pull the tag aggregations that others have used, like security, which is a great help during the day at work. I can also track some topics I keep finding myself at the periphery of, and ever more interested in, as they tie to some personal projects.
I did consider something similar with Feedster, but it was down for updating recently when I had the tiny bit of time to fiddle with setting something up. By the way, Feedster is now Standards-based (not fully valid, but rather close) and it loads very quickly (most of the time).
Tantek mulls over a means to keep contact info up to date. This should be much easier than Tantek has made out. It could be as easy as publishing one's own vCard that is pointed to with RSS. When the vCard changes, the RSS feed notifies the contact-info repositories, and they grab the vCard and update the repository's content. This is essentially pulling contact information into the user's Personal InfoCloud. (Contact info updating and applications are a favorite subject of mine to mull over.)
Why vCard? It is a standard sharing structure that all contact information applications (repositories) understand. Most of us have more than one contact repository: Outlook at work; Lotus Organizer on the workstation at home; Apple Address Book and Entourage on the laptop; the Palm address book on the cellphone/PDA; and Addresses on the iPod. All of these applications should synch and perfectly update each other (deleting and updating when needed), but they do not. Keeping vCard field names and order constant should give the info corrective properties. The vCard RDF W3C specification seems to lay out existing standards that should be adopted for a centralized endeavor.
Why not Plaxo? Plaxo is limited to applications I do not run everywhere (for their download version), and its Web version is impractical, as when I need contact information I am most often not in front of a terminal; I am using a Treo or pulling the information out of my iPod.
While Tantek's solution is good and somewhat usable, it is not as universal as a vCard RDF would be, with an application that pinged the XML file to check for an update daily or every few days.
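As a sketch of the idea (all names, addresses, and URLs here are made up), the publishing side needs only two small pieces: the vCard itself and an RSS item announcing that it changed, which subscribed repositories could poll:

```python
import xml.etree.ElementTree as ET
from email.utils import formatdate

def make_vcard(name, email):
    """A minimal vCard 3.0 body; real cards carry many more fields."""
    return "\r\n".join([
        "BEGIN:VCARD", "VERSION:3.0",
        f"FN:{name}", f"EMAIL:{email}",
        "END:VCARD", ""])

def make_update_feed(vcard_url):
    """An RSS item that announces 'my vCard changed; fetch it here'.
    Subscribed repositories would poll this feed and re-fetch the card."""
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = "Contact info updates"
    item = ET.SubElement(channel, "item")
    ET.SubElement(item, "title").text = "vCard updated"
    ET.SubElement(item, "link").text = vcard_url
    ET.SubElement(item, "pubDate").text = formatdate()
    return ET.tostring(rss, encoding="unicode")

card = make_vcard("Example Person", "person@example.org")
feed = make_update_feed("http://example.org/me.vcf")
```

A repository would watch the feed's pubDate; when it changes, it re-fetches the vCard from the link and overwrites its stored copy.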
Tech Review interviews Rael about rising tech trends and discusses alpha geeks. This interview touches on RSS, mobile devices, social networks, and much more.
Three times in the past week I have run across folks mentioning Hand/RSS for Palm. This seems to fill the hole that AvantGo does not completely fill. Many of the information resources I find helpful/insightful have RSS feeds but do not have a "mobile" version (more importantly, the content is not made with standard, validating (X)HTML markup and a malleable page layout that will work for desktop/laptop web browsers and smaller mobile screens).
I currently pull, then scan and read, content from 125 RSS feeds. Having some of these feeds pulled and stored on my PDA would be a great help.
Content, make that information in general, stored and presented in a format that is usable on only one device type or in one application is very short-sighted. Information should be reusable to be more useful. Users copy and paste information into documents, to-do lists, calendars, PDAs, e-mail, weblogs, text-searchable data stores (databases, XML repositories, etc.), etc. Digital information, from its early days, was about reusing the information. Putting text only in a graphic is foolish (the AIGA websites need to learn this lesson), as is locking the information in a proprietary application or proprietary format.
The whole of the Personal Information Cloud, the rough cloud of information that the user has chosen to have follow them so that it is available when they need it, is only usable if the information is in an open format.
I am happy that RSS changed my Web reading habits, as the past two nights with limited Internet access I was able to crack open NetNewsWire and catch up on some reading from the 93 feeds I currently pull in. I have always been curious why some folks post only the titles of articles in RSS or RDF feeds and not even a short summary or teaser. Now this really frustrates me, as I could only read titles, some of which were intriguing and could have warranted a "to read" status, but instead I ignored them and jumped to items that offered summaries or full content.
Tantek discusses Jeffrey Zeldman handrolling his own RSS feeds (as well as his own site). Tantek also discusses those who still handroll their own weblogs, as well as those who have built their own CMS to run their blogs. It was good to see that there are many others who still build their own and handroll (I stopped handrolling in October 2001, when I implemented my own CMS that took advantage of a travel CMS I had built for myself).
Parse RSS at All Costs by Mark Pilgrim covers what is required to parse RSS properly. More importantly, Mark points out that as more RSS feeds are created, the feeds are being more poorly formed. Mark then shows how to build a parser that will be a little more forgiving of poor markup.
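This is not Mark's parser, but a toy illustration of the forgiving approach: try a strict parse first, and when the feed turns out not to be well-formed XML, repair one of the most common offenses (bare ampersands) and try again:

```python
import re
import xml.etree.ElementTree as ET

def parse_titles(feed_xml):
    """Try a strict XML parse first; on failure, escape bare ampersands
    (a very common cause of broken feeds) and retry once."""
    try:
        root = ET.fromstring(feed_xml)
    except ET.ParseError:
        # Replace any & that does not already start an entity reference.
        repaired = re.sub(r"&(?!#?\w+;)", "&amp;", feed_xml)
        root = ET.fromstring(repaired)
    return [item.findtext("title") for item in root.iter("item")]

# An invented feed with a bare & that a strict parser rejects outright.
bad_feed = """<rss version="0.91"><channel>
<item><title>Fish & Chips</title></item>
</channel></rss>"""

print(parse_titles(bad_feed))
```

Pilgrim's real parser handles far more kinds of breakage than this, but the shape is the same: be liberal in what you accept.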
Not long after I posted my comments on RSS disconnecting the creator and the user, it started sinking in that it really does not matter. Well, it does to some degree, but from a user's perspective RSS allows a quicker, more efficient method of scanning for information of interest and of easily seeing, from one interface, when new content has been written. I use others' blogs and digests to find information to post for my own reflection and to use as jumping boards to new ideas.
Yes, the interaction between creator and user is important, but it is not as important as getting information out. I began thinking that the whining about the lack of interaction on my part was rather selfish and very contrary to the focus I have for most information, which is having the ability to access, digest, add to, or reformulate the information into another medium or presentation that may offer better understanding.
I was self-taught in the values of the Cluetrain, so when I heard about it for the first time I was surprised to some large degree that the manifesto had resonance and turned on a light for many people. For myself and some others, I guess we drank the Kool-Aid early, as we thought this was the way things were, or should be, from the beginning of electronic information: a truly open community where information flows freely. Yes, the RSS/RDF/XML feed is a freer flow of information and puts the choice of information consumption in the user's hands.
Since I added the vanderwal.net RSS feed I have been picking up other RSS and RDF feeds. I have been using Ranchero's NetNewsWire Lite to pull in many feeds of sites I read on a regular basis. I have become a convert to RSS/RDF extracts. They are a time saver, showing only updated sites. I have read feeds of many of the news sites in MacReporter for quite some time, but having personal content and blogs pulled in is quite a timesaver and allows me to get through more information.
I do see a downside to the XML feeds, in the disconnection of the creator from the users. The Web has given us digital ghosts that we know come to our sites and possibly read content. This is much like the shadows in Plato's cave, in that we do not see the actual people who come to the sites, but we surmise what these visitors are like and what they come to read. Occasionally we receive comments on the site, e-mails from visitors, or, best of all, meet folks in person who read/experience our work. It is very much a disconnected world, built from guesses, for those who try and care (some just build resources for themselves, to be used remotely, and all others are welcome "free riders", like here). The XML feeds seem to take away another level of the "interaction" between the creator and the users. This relationship is important in communication, as the feedback helps shape the message and offers paths for both parties to learn and grow.
The XML feeds offer the consumers of the information an easier and more efficient means of getting, filtering, and digesting information, but the return path to the creator is diminished. The feeds are a consumption-oriented communication channel, not so much an interactive one. The downside is a lack of true interactive communication; it becomes more like consuming produced products, much like frozen dinners popped in the microwave. The interaction provides the creator with an understanding of how the user consumes the information, what the consumer is finding usable, and how the consumer is being drawn to the information. When one cooks one's own meals, or is being cooked for, the meal can be spiced and seasoned appropriately for consumption. The presentation of the food can be modified to enhance pleasure. The live cooking process allows for feedback and modification. Much like the interaction of information in a communication scenario, the creator and the consumer have a relationship; as the creator finds the structure and the preferred means of consuming the information, the presentation and structure of the information can be altered appropriately.
In a sense the XML feed could be seen as just one structure for presenting information. There are other options available that can be used to bring back the interaction between the creator and consumer. Relationships and connections are built over this expansive medium of the Web through information and experience. These connections should be respected and given a place to survive.
Yes, I finally got up to speed with the rest of the world and added an RSS feed, along with a new page that will track available vanderwal.net XML documents and RSS feeds. I may make a couple of category-specific RSS feeds if there is interest. Use the (now working again) comments or the contact page to let me know what you would like.
I have only put out the first RSS feed, in version 0.91, at the moment. I may upgrade it in the near future, as it is now relatively easy to build from my end. I have been getting a decent amount of pestering from folks asking for the feed. You see, I still build my own CMS for the site, and it takes time and priority to get around to some of these things.
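For anyone curious what sits behind a 0.91 feed, a minimal document looks roughly like this (the titles and URLs here are placeholders for illustration, not the actual feed contents):

```xml
<?xml version="1.0"?>
<rss version="0.91">
  <channel>
    <title>Off the Top</title>
    <link>http://www.example.com/weblog/</link>
    <description>Recent entries</description>
    <language>en-us</language>
    <item>
      <title>An entry title</title>
      <link>http://www.example.com/weblog/entry1</link>
      <description>A short summary of the entry.</description>
    </item>
  </channel>
</rss>
```

Each posting becomes an item with a title, a link, and a description, which is why the format is so easy to generate from a homegrown CMS.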
Why not move to Movable Type or Drupal (the only two I would currently consider)? I enjoy building my own system, but it does require that I build my own APIs or even my own applications to mirror functionality. I like building CMSs, and this one is one of six that I have designed and/or fully built since 1997. It is a hobby for me as well as a job.
Content management is back at the forefront of every aspect of my digital life again. Content management revolves around keeping information current, accurate, and reusable (there are many more elements, but these cut to the core of many issues). Maintaining Websites and providing information resources on the broader Internet have revolved around static Web pages or information stored in MS Word, PDF files, etc. Content management has been a painful task of keeping this information current and accurate across all these various input and output platforms. This brings us to content management systems (CMS).
As I pointed to earlier, there are good resources for getting and understanding CMS and how our roles change when we implement one. Important to understanding is the separation of the content (data and information) from the presentation (layout and style) and from the application (PDF, Web page, MS Word document, etc.). This requires an input mechanism, usually a form that captures the information and places it in a data/information store, which may be a database, an XML document, or a combination of the two. This also provides for a workflow process that involves proofing and editing the information, along with versioning it.
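As a rough illustration of that separation and workflow, here is a small hypothetical sketch; the Entry class, the state names, and the fields are mine for the example, not taken from any particular CMS:

```python
from dataclasses import dataclass, field

# Hypothetical workflow states, mirroring the proofing/editing cycle above.
WORKFLOW = ["draft", "proofed", "published"]

@dataclass
class Entry:
    title: str
    body: str                  # content only -- no layout or styling mixed in
    state: str = "draft"
    versions: list = field(default_factory=list)

    def revise(self, new_body):
        """Keep the old body as a version before replacing it."""
        self.versions.append(self.body)
        self.body = new_body
        self.state = "draft"   # edits go back through proofing

    def advance(self):
        """Move the entry one step along the workflow."""
        i = WORKFLOW.index(self.state)
        if i < len(WORKFLOW) - 1:
            self.state = WORKFLOW[i + 1]

e = Entry("CMS notes", "First draft.")
e.advance()                    # draft -> proofed
e.revise("Second draft.")      # edit triggers re-proofing, old body versioned
```

The point of the sketch is only that the stored record knows nothing about how it will be presented; presentation comes later, from templates.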
Key to the CMS is the separation of content, which means there needs to be a method of keeping links apart from the input flow. Mark Baker provides a great article, What Does Your Content Management System Call This Guy, about how to handle links. Links are an element that separates the CMS-lite tools (Blogger, Movable Type, etc.) from more robust CMSs (other elements of difference are more expansive workflow, metadata capturing, and content-type handling (images, PDF, etc. and their related metadata needs)). Many older systems, often used for newspaper and magazine publications (the New York Times and San Francisco Chronicle), placed their links outside the body of the article. The external linking provided an easy method of link management that helps ensure there are no broken links (if an external site changes its location (URL), there should be only one place we have to modify that link, rather than searching every page for links to replace). The method in the Baker article outlines how many current systems provide this same service, which is similar to the Wiki Wiki approach. The Baker method will also benefit greatly from all of the Information Architecture work you have done to capture classifications of information and metadata types (IA is a needed and required part of nearly every development process).
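The link-by-reference idea can be sketched in a few lines. The [link:name] token syntax and the URL below are hypothetical placeholders, just to show that the body carries link names while a single table maps names to URLs, so a moved page is fixed in exactly one place:

```python
# One central link table: change a URL here and every page that
# references the link name picks up the fix on the next render.
LINKS = {
    "baker-article": "http://www.example.com/cms-links",  # placeholder URL
}

def render(body: str, links: dict) -> str:
    """Replace [link:name] tokens with HTML anchors from the link table."""
    for name, url in links.items():
        body = body.replace(f"[link:{name}]",
                            f'<a href="{url}">{name}</a>')
    return body

html = render("See [link:baker-article] for details.", LINKS)
```

A real system would resolve the anchor text and validate the names, but the principle of a single point of change is the same.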
What this gets us is content that we can easily output to a Web site in HTML/XHTML, in a template that meets all accessibility requirements, ensures quality assurance has been performed, and provides a consistent presentation of information. The same information can be output in a simpler presentation template for handheld devices (AvantGo, for example) or in WML for WAP. The same information can be provided in an XML document, such as RSS, which gives others easier access to the information. The same information can be output to a template that is stored as PDF, which is then sent to a printer for a newsletter or distributed for users to print on their own. The technologies for information presentation are ever changing, and a CMS allows us to easily keep up with these changes and output the information in the "latest and greatest", while still being able to provide information to those using older technologies.
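A toy sketch of that one-source, many-outputs idea: the same stored entry poured through three presentation templates, full HTML, a stripped-down handheld view, and an RSS item. The entry fields, template strings, and URL are illustrative, not a real system:

```python
# One content record, no presentation mixed in.
entry = {
    "title": "Content management notes",
    "link": "http://www.example.com/entry/1",   # placeholder URL
    "summary": "Why separating content from presentation pays off.",
}

# One template per output application; adding a new device means
# adding a template, not touching the content.
TEMPLATES = {
    "html": '<h1>{title}</h1><p>{summary}</p><a href="{link}">more</a>',
    "handheld": "{title}\n{summary}",
    "rss": ("<item><title>{title}</title><link>{link}</link>"
            "<description>{summary}</description></item>"),
}

def publish(entry: dict, fmt: str) -> str:
    """Pour the single content record into the chosen template."""
    return TEMPLATES[fmt].format(**entry)
```

Calling publish(entry, "rss") or publish(entry, "handheld") yields different presentations of the identical content, which is the whole argument for the separation.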
This site will be adding an RSS feed in the near future. RSS allows you to pull together a bunch of RSS files from your favorite sites, parse them, and then build your own link page, with summaries, of current articles or postings from your favorite sites.