Off the Top: Standards Entries


January 1, 2016

Happy New Year - 2016 Edition

Happy New Year!

I believe 2016 will be a good year, possibly quite a good year. After seven bumpy years, and with 2015 off to a rough start on the health front, the year stayed rather calm.

I don’t make resolutions for the new year. The practice always delayed starting new habits and efforts at the moments they fit best. The “oh, I’ll start doing this on New Year’s” approach always seemed a bit odd, when the moment something strikes you is a perfectly good moment to start down the path to improvement or something new.

This blog has been quiet for a while, far too long in fact. Last year being sick disrupted a good stretch of posting on a nearly daily basis, and I really would like to get back to that. I was planning to start writing again over the past couple of weeks, but the schedule was a bit filled and chaotic.

Digging Through Digital History

This past year I did a long stretch working as an expert witness on a social software case. The case was booted right before trial and decided for the defense (the side I was working with). In doing this I spent a lot of time digging back through the last 5 to 10 years of social software, web, enterprise information management, tagging / folksonomy, and communication. Having this blog at my disposal, along with my Personal InfoCloud blog, was a great help, as my practice of knocking out ideas, no matter how rough, proved a great asset. But it also proved a bit problematic, as a lot of things I linked to were gone from the web. Great ideas of others that sparked something in me were toast. They were not even in the Internet Archive’s Wayback Machine. Fortunately, I have a personal archive of things in my DevonThink Pro repository on my laptop, where I have been tucking things of potential future interest since 2005. I have over 50,000 objects tucked away in it, and it takes up between 20GB and 30GB on my hard drive.

I have a much larger post brewing on this, which I need to write and have promised quite a few others I would write. The big problem in all of this is that a lot of good, if not great, thinking is gone from the web. It is gone because domain names were not kept, a site changed and dropped old content, blogging platforms disappeared (or weren’t kept up), or people lost interest and just let everything go. The great thinking from the 90s, that the web was going to be a repository for all human thinking with great search and archival properties, is pretty much B.S. The web is fragile and not so great at archiving for long stretches. I found HTML is good for capturing content, but PDF proved the best long term (10 years = long term) digital archive for search in DevonThink. The worst has been sites with a lot of JavaScript that were saved as websites rather than into PDF. JavaScript is an utter disaster for archiving. I have quite a few things I tucked away as recently as 18 months ago that are unreadable thanks to JavaScript (older practices and modifications, which may be deemed security issues or other changes of mind, have stopped once-functional JavaScript from working). The practice of putting everything on the web, which can mean putting up application front ends and other contrivances, is only making the web far more fragile.

The best is still straight up HTML and CSS, enhancing from there with JavaScript. The other recent disaster, which is JavaScript related, is infinite scroll breaking distinct URLs and pages. Infinite scroll is great for its intended use, which is stringing crappy content into one long string so advertisers see many page views. It manufactures a false understanding that the content is valued and read. Infinite scroll has little value to the person reading, other than in the rare case good content is strung together (most sites using infinite scroll do it because the content is rather poor and they need some means of telling advertisers that they have good readership). For archival purposes, capturing just the one page you care about most often gets 2 to 5 others along with it. Linking back to the content you care about many times will not get you back to the distinct article or page, because that page doesn’t actually live anywhere. I can’t wait for this dim-witted practice to end. The past 3 years or so of thinking I had an article / page of good content I could point to cleanly and archive cleanly was a fallacy if I was trying to archive in the playland of infinite scroll cruft.

Back to Writing Out Loud

This past year of trying to dig out the relatively recent past of 5 to 10 years, with some attempts to go back farther, reinforced the good (that may be putting it lightly) practice of writing out loud. In the past few years I have still been writing a lot. But much of this writing has been in notes on my local machines, in my own shared repositories that are available to my other devices, or, in the past couple of years, in Slack teams. I don’t tend to write short tight pieces, as I tend to fill in the traces back to the foundations for what I’m thinking and why. A few of the Slack teams (what Slack calls the top level cluster of people in their service) get some of these dumps. I may drop in a thousand or three words in a day across one to four teams, as I am either conveying information or working through an idea or explanation and writing my way through it (writing is more helpful than talking my way through it, as I know somebody is taking notes I can refer back to).

A lot of these things I have dropped into not so public channels, where they are not easily findable again, even for myself (Slack is brilliantly searchable within a team, but not across teams). When I am thinking about it I will pull these brain dumps into my own notes system that is searchable. If they are well formed I mark them as blogfodder (with a tag as such, or in a large outline of other like material) to do something with later. This “do something with later” hasn’t quite materialized as of yet.

I plan on posting these writing out loud efforts in my blogs, and likely also into my Medium area, as it has more constant eyes on it than my blogs these days. I tend to syndicate finished pieces out to LinkedIn as well, but LinkedIn isn’t quite the space for thinking out loud, as it isn’t the thinking space that Medium or blogs have been, and it doesn’t seem to be shifting that way.

Not only have my own resources been really helpful, but in digging through the expert witness work I found blogs to be great sources of really good thinking (that is where really good thinking was done; this isn’t exactly the case now, unless you consider adding an infinitely redundant cat photo to a blog to be really good thinking). A lot of things I find valuable still today are on blogs, from people thinking out loud. I really enjoy David Weinberger, Jeremy Keith, and the return of Matt Webb to blogging. There are many others I read regularly (see my links page for more).



December 31, 2010

Closing Delicious? Lessons to be Learned

There was a kerfuffle a couple weeks back when the social bookmarking service Delicious was marked for end of life by Yahoo, which caused a rather large number of people I know to go rather nuts. Yahoo has made the claim that they are not shutting the service down, which only seems like a stall tactic, but perhaps they may actually sell it (many accounts from former Yahoo and Delicious team members have pointed out the difficulties in that, as it was ported to Yahoo’s own services and with their own peculiarities).

Redundancy

Nevertheless, this brings up an important point: redundancy. One lesson I learned many years ago related to the web (heck, related to anything digital) is that it will fail at some point. Cloud based services are not immune, and the network connection to those services is often even more problematic. But one of the tenets of the Personal InfoCloud is that it is where you keep your information across trusted services and devices so you have continual and easy access to that information. Part of ensuring that continual access is ensuring redundancy and backing up. Optimally the redundancy or back-up is a usable service that permits ease of continuing use if one resource is not reachable (those sunny days where there's not a cloud to be seen). Performing regular back-ups of your blog posts and other places you post information is valuable. Another option is a central aggregation point (these are long dreamt of and yet to be really implemented well; this is a long brewing interest with many potential resources and conversations).

With regard to Delicious I’ve used redundant services and manually or automatically fed them. I was doing this with Ma.gnol.ia as it was (in part) my redundant social bookmarking service, but I also really liked a lot of its features and functionality (there were great social interaction design elements that were deployed there that were quite brilliant and made the service a real gem). I also used Diigo for a short while, but too many things there drove me crazy and continually broke. A few months back I started using Pinboard, as the private reincarnation of Ma.gnol.ia shut down. I have also used ZooTool, which has more of a visual design community (the community that self-aggregates to a service is an important characteristic to take into account after the viability of the service).

Pinboard has been a real gem, as it uses the commonly implemented Delicious API (version 1) as its core API, which means most tools and services built on top of Delicious can be relatively easily ported over with just a change to the source URL. This was similar for Ma.gnol.ia and other services. But Pinboard will also continually pull in Delicious postings, so it works very well for redundancy's sake.
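As a rough illustration of how small that change is (a sketch of mine, not from the original post): the method names and parameters carry straight over, so a tool fetching recent bookmarks only needs its base URL swapped.

    Delicious API (version 1):   https://api.del.icio.us/v1/posts/recent
    Pinboard (same API, v1):     https://api.pinboard.in/v1/posts/recent

Everything after the /v1/ stays the same, which is why so many Delicious tools could be pointed at Pinboard with a one-line change.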

There are some things I quite like about Pinboard (some things I don't, and I will get to them), such as the easy integration from Instapaper (anything you star in Instapaper gets sucked into your Pinboard). Pinboard has a rather good mobile web interface (something I loved about Ma.gnol.ia too). Pinboard was started by co-founders of Delicious and so has solid depth of understanding. Pinboard is also a pay service, based on an incremental one time fee and a full archive option for pages bookmarked (it saves a copy of pages), which is great for its longevity as it has some sort of business model (I don't have faith in the "underpants - something - profit" model) and it works brilliantly for keeping out spammers (another pain point for me with Diigo).

My biggest nit with Pinboard is the space delimited tag terms, which means multi-word tag terms (San Francisco, recent discovery, etc.) are not possible (non-alphabetic word delimiters, like underscores, hyphens, and dots, are really problematic for clarity, for easy aggregation without scripting to disambiguate and assemble relevant related terms, and for mainstream user understanding). The lack of an easy way to see who is following my shared items, so as to find others to potentially follow, is something from Delicious I miss.

For now I am still feeding Delicious as my primary source, which is naturally pulled into Pinboard with no extra effort (as it should be with many things), but I'm already looking for a redundancy for Pinboard, given the questionable state of Delicious.

The Value of Delicious

Another thing that surfaced with the Delicious end of life (non-official) announcement from Yahoo was the incredible value it has across the web. Not only do people use it and deeply rely on it for storing links/bookmarks, contextualizing them with tags and annotations, refinding their own aggregation, and sharing this out easily for others, but they use Delicious in a wide variety of different ways. People use Delicious to surface relevant information of interest related to their affinities or work needs, as it is easy to get a feed not only for a person or a tag, but also for a person and tag pairing. The immediate responses that sounded serious alarm at news of Delicious' demise came from those that had built valuable services on top of Delicious. There were many stories about well known publications and services not only programmatically aggregating potentially relevant and tangential information for research, ad hoc and in relatively real time, but also sharing out links for others. Some use Delicious to easily build "related information" resources for their web publications and offerings. One example comes from Marshall Kirkpatrick of ReadWriteWeb, wonderfully describing their reliance on Delicious.

It became clear very quickly that Yahoo is sitting on a real backbone of many things on the web, not the toy product some in Yahoo management seemed to think it was. The value of Delicious to Yahoo seemingly diminished greatly after they were no longer in the search marketplace. Hunches, silently confirmed, that Delicious was used as fodder to influence search algorithms with highly probable synonyms and related web content stored by explicit interest (a much higher value than inferred interest) made Delicious a quite valued property while Yahoo ran its own search property.

For ease of finding me (should you wish) on Pinboard I am http://pinboard.in/u:vanderwal




August 20, 2007

Why Ma.gnolia is One of My Favorite Social Bookmarking Tools

After starting the Portable Social Network Group in Ma.gnolia yesterday I received a few e-mails and IMs regarding my choice. Most of the questions were why not just use tags and del.icio.us. Since I posted my Ma.Del Tagging Bookmarklet post I have had a lot of questions about Ma.gnolia and my preference as well, as people thought I was not a fan of it. I have been thinking I would blog about my usage, but given my work advising on social bookmarking and the social web, I shy away from talking about what I use, as what I like is likely not going to be a good fit for others. But my work is one of the reasons I want to talk about what I like using, as nearly every customer of mine and many presentation attendees look at del.icio.us first (it kicked the door wide open with a tool that was light years ahead of all others), but it is not for everybody and there are many other options. Much of my work is with enterprises and organizations of various sizes, and del.icio.us is not right for them for privacy reasons. I still add to del.icio.us along with my favorite, as there are many people that have subscribed to that feed and derive value from the subscription, so I take the extra step to keep that feed current.

Ma.gnolia Offers Great Features for Sociality

I have two favorite tools for my own personal social bookmarking: Ma.gnolia and Clipmarks (I don't think I have anything publicly shared in Clipmarks). First the latter: I use Clipmarks primarily when I only want to bookmark a sub-page element out on the web, such as paragraphs, sentences, quotes, images, etc.

I moved to try Ma.gnolia again last Fall when something changed in del.icio.us search and the results were not returning things that were in del.icio.us. Trying Ma.gnolia, by importing all of my 2200 plus bookmarks, not only allowed me to search and find the things I wanted, but I quickly became a fan of its many social features. In the past year or less it has become more social in insanely helpful and kind ways. Not only does Ma.gnolia have groups that you can share bookmarks with, but there is the ability to have discussions around the subject in those groups. Sharing with a group is insanely easy. Groups can be private if the manager wishes, which makes it a good test ground for businesses or other organizations to test the social bookmarking waters. I was not a huge fan of rating bookmarks, as if I bookmarked something I am wanting to refind it, but in a more social context it has value for others to see the strength of my interest (normally 3 to 5 stars). One of my favorite social features is giving "thanks", which is not a trigger for social gaming like Digg, but an interpersonal expression of appreciation that really makes Ma.gnolia a friendly and positive social environment.

Started with Beauty, but Now with Ease

Ma.gnolia started as a beautiful del.icio.us (it was not the first), and the beauty got in the way of usability for many. But Ma.gnolia has kept the beautiful strains and added simple ease of use, in a very Apple delightful-moments sort of way. The thanks are a nice treat, and the latest interactions provide non-disruptive ease of use to accomplish a task without completely taking you away from your previous flow (freaking brilliant in my viewpoint - anything that preserves flow to accomplish a short task is a great step). Another killer feature is Ma.gnolia Roots, which is a bookmarklet that, when clicked, hovers a semi-transparent layer over the webpage to show information from Ma.gnolia about that page (who has linked to it, tags, annotations, etc.) and makes it really easy to bookmark that page from that screen. There is also the API (including a replica of the del.icio.us API that nearly all services use as the standard), add-ons, a Creative Commons license for your bookmarks, many bookmarklet options, and feed options. But there are also the little things that are not usually seen or noticed, such as great URLs that can be easily parsed, pages that are properly marked up semantically, and Microformats broadly and properly used throughout the site (at nearly every pivot).

Intelligently Designed

For me Ma.gnolia is not only a great site to look at and a great social bookmarking site that is really social (as well as polite and respectful of my wishes), but a great example of semantic web mark-up (including microformats). There is so much attention to detail in the page markup that, for those of us that care, it is amazingly beautiful. The visual layer can be optimized for more white space and detail, or for much easier scrolling. The interactions, ease of use, and delightful moments assist you rather than taking you out of your flow (workflow, taskflow, etc.) and make you ask why all applications and social sites are not this wonderful.

Ma.gnolia is not perfect, as it needs some tools to better manage and bulk edit your own bookmarks. It could use a sort on search results (as well as narrowing by date range). Search could use some RedBull at times. It could improve filtering by using co-occurrence of tag terms, as well as using co-occurrence for disambiguation.

Overall for me personally, Ma.gnolia is a tool I absolutely love. It took the basic social bookmarking idea in del.icio.us and really made it social. It has added features and functionality that are very helpful and well executed. It is an utter pleasure to use. I can not only share things easily and get the wonderful effects of social interaction, but I can refind things in my now 2,500 plus bookmarks rather easily.



February 2, 2007

Stikkit Adds an API

Stikkit has finally added an API. This makes me quite happy. Stikkit offers great ease of information entry, and it is perfect for adding annotations to web-based information.

Stikkit is My In-line Web Triage

I have been using Stikkit, from the bookmarklet, as my in-line web information triage. If I find an event or something I want to come back to later (other than to read and bookmark) I pop that information into Stikkit. Most often it is to remind me of deadlines, events, company information, etc. I open the Stikkit bookmarklet and add the information. The date information I add is dumped into my Stikkit calendar, names and addresses are put into the Stikkit address book, and I can tag them with context for easier retrieval.

Now with the addition of the API it is easy to retrieve a vCard, iCal, or other standard data format from Stikkit that I can drop into the tools where I normally aggregate similar information. I do not need to refer back to Stikkit to copy and paste (or worse, mis-type) into my work apps.

I can also publish information from my preferred central data stores to Stikkit so I have web access to events, to dos, names and addresses, etc. From Stikkit I can then share the information with only those I want to share that information with.

Stikkit is growing to be a nice piece for microcontent tracking in my Personal InfoCloud.



July 28, 2006

Pixelating Pleasure

For your Friday pleasure, there is a special treat for you pixel-loving people at Iconfactory. It is time to take out the tables, well maybe past time, but at least they are doing it. [hat tip to Brian]



May 25, 2006

Developing the Web for Whom?

Google Web Toolkit for the Closed Web

Andrew, in his post "Reading user interface libraries", brings in elements of yesterday's discussion on The Battle to Build the Personal InfoCloud. Andrew brings up something in his post regarding Google and their Google Web Toolkit (GWT). He points out it is in Java, while most of the personal web (or new web) is built in PHP, Ruby (including Ruby on Rails), Python, and even Perl.

When GWT was launched I was at XTech in Amsterdam, and much of the response was confusion as to why it was in Java and not something more widely used. It seems that by choosing Java for GWT, Google is aiming at those behind the firewall. There is still much development on the Intranet done in Java (as well as .Net). This environment needs help integrating rich interaction into its applications. The odd part is many Intranets are quite user-experience challenged as well, and user experience is not one of Google's public fortés.

Two Tribes: Inter and Intra

This whole process made me come back to the two differing worlds of Internet and Intranet. On the Internet the web is built largely with Open Source tools for many of the big services (Yahoo, Google, eBay, etc.), and nearly all of the smaller services are Open Source (the cost for hosting is much, much lower). The Open Source community is also iterating its solutions insanely fast to build frameworks (Ruby on Rails, etc.) that meet ease of development needs. These sites also build for all operating systems and aim to work in all modern browsers.

On the Intranet the solutions are many times more likely to be Java or .Net, as there is "corporate" support for these tools, training is easy to find, and there is a phone number to call for help. The development is often for a narrower set of operating systems and browsers, which can be relatively easy to define in a closed environment. The Google solution seems to work well for this environment, but early reaction to its release on the personal web fell very flat.

13 Reasons

A posting about the Top 13 reasons to CONSIDER the Microsoft platform for Web 2.0 development and its response, "Top 13 reasons NOT to consider the Microsoft platform for Web 2.0 development" [which is on a .Net created site], had me thinking about these institutional solutions (Java and .Net) in an openly developed personal web. The institutional solutions seem like they MUST embrace the open solutions or work seamlessly with them. Take any one of the technical solutions brought up in the Microsoft list (not including Ray Ozzie or Robert Scoble as technical solutions) and think about how it would fit into personal site development or a Web 2.0 developed site. I am not so sure that in their current state the MS tools could easily drop in without converting to the whole suite. Would Visual Studio .Net include a Python, PHP, Ruby, Ruby on Rails, or Perl plug-in? The Atlas solution is one option among now hundreds of Ajax frameworks. To get used, the tools must have more value (not more cost or effort) and embrace what is known (frogs are happy in warm water, but will not enter hot water). Does Atlas work in all browsers? Do I, or any Internet facing website developer, want to fail some part of an audience that is using modern browsers?

The Web is Open

The web is about being browser agnostic and OS agnostic. The web makes the OS on the machine irrelevant. The web is about information, media, data, content, and digital objects. The tools that allow us to do things with these elements are increasingly open and web-based and/or personal machine-based.

Build Upon Open Data and Open Access

The web is moving to making the content elements (including the microcontent elements) open for use beyond the site. Look at Amazon Web Services (AWS) and the open APIs in the Yahoo Developer Network. Both of these companies openly ease community access to and use of their content and services. This draws people into Amazon and Yahoo media and properties. What programming and scripting languages are required to use these services? Any that the developer wants. That is right: unlike Google pushing Java to use their solution, Amazon and Yahoo get it; it is up to the developer to use what is best for them. What browsers do the Amazon and Yahoo solutions work in? All browsers.

I have been watching Microsoft Live since I went to Search Champs, as they were making sounds that they got it too. The Live Clipboard [TechCrunch review] that Ray Ozzie showed at O'Reilly ETech is being developed in an open community (including Microsoft) for the whole of the web to use. This is being done for use in all browsers, on all operating systems, for all applications, etc. It is open. This seems to show some understanding of the web that Microsoft has not exhibited before. For Microsoft to become relevant, get in the open web game, and stay in the game, they must embrace this approach. I am never sure that Google gets this, and there are times when I am not sure Yahoo fully gets it either (a "media company" that does not support the Mac, even though the Mac community is heavily media-centric, uses and consumes media at a much higher rate than the supported community, and holds many of the trend setters in the blogging community - just take a look around at SXSW Interactive or most any other web conference these days; even XTech had one third of its attendees on Macs).

Still an Open Playing Field

There is an open playing field for the company that truly gets it and focusses on the person and their needs. This playing field is behind firewalls on the Intranet and out in the open Internet. It is increasingly all one space, and it continues to be increasingly open.



May 23, 2006

More XTech 2006

Now that I have had a little time to sit back and think about XTech, I am quite impressed with the conference. The caliber of the presenters and the quality of their presentations were some of the best of any conference I have been to in a while. The presentations got beneath the surface level of the subjects and provided insight that I had not run across elsewhere.

The conference focus on the browser, open data (XML), and high level presentations was a great mix. There was much cross-over in the presentations, and it took me a while to get the hang of the fact that this was not a conference of stuff I already knew (or presented at a more introductory level), but of things I wanted to dig deeper into. I began to realize late into the conference (or after, in many cases) that the people presenting were people whose writing and contributions I had followed regularly when I was doing deep development (not managing web development) of web applications. I changed my focus last Fall to get back to developing innovative applications, working on projects that are built around open data and that fill some of the many gaps in the Personal InfoCloud (I also left to write, but that got sidetracked).

As I mentioned before, XTech had the right amount of geek mindset in the presentations. The one that really brought this to the forefront of my mind was on XForms, an Alternative to Ajax by Erik Bruchez. It focussed on using XForms as a means to interact with structured data with Ajax.

Once it dawned on me that this conference was rather killer and I should be paying attention to the content, and not just to those in the floating island of friends, the event was nearly two-thirds of the way through. This huge mistake on my part was due to the busy nature of things that led up to XTech, as well as not getting there a day or two earlier to adjust to the time and attend the pre-conference sessions and tutorials on Ajax.

I was thrilled to see the Platial presentation and meet the makers of the service. When I went to Simon Willison's presentation rather than the GeoRSS session, I realized there was much good content at XTech, and it is now on my must-attend list.

As the conference was progressing I was thinking of all of the people that would have really benefitted from and enjoyed XTech as well. A conference about open data and about systems to build applications that meet real people's needs is essential for most developers working out on the live web these days.

If XTech sounded good this year in Amsterdam, you may want to note that it will be in Paris next year.



January 21, 2006

Changing the Flow of the Web and Beyond

In the past few days of being wrapped up in moving this site to a new host and in client work, I have come across a couple of items that have similar DNA, which also relate to my most recent post on the Come to Me Web over at the Personal InfoCloud.

Sites to Flows

The first item to bring to light is a wonderful presentation, From Sites to Flows: Designing for the Porous Web (3MB PDF), by Even Westvang. The presentation walks through the various activities we do as personal content creators on the web. Part of what makes this presentation fantastic is its focus on microcontent (the granular content objects) and its relevance to context. Personal publishing is more than publishing on the web; it is publishing to content streams, or "flows" as Even states it. These flows of microcontent are now consumed less in web browsers, their first home, and more in syndicated feeds (RDF, RSS/Atom, Trackback, etc.). Even moves on to talk about Underskog, a local calendaring portal for Oslo, Norway.

The Publish/Subscribe Decade

Salim Ismail has a post about The Evolution of the Internet, in which he states we are in the Publish/Subscribe Decade. In his explanation Salim writes:

The web has been phenomenally successful and the amount of information available on it is overwhelming. However, (as Bill rightly points out), that information is largely passive - you must look it up with a browser. Clearly the next step in that evolution is for the information to become active and tell you when something happens.

It is this being overwhelmed with information that has been of interest to me for a while. We (the web development community) have built mechanisms for filtering this information. There are many approaches to this filtering, but one of them is the subscription and alert method.

The Come to Me Web

It is almost as if I had written Come to Me Web as a response to, or extension of, what Even and Salim are discussing (the post had been in the works for many weeks and is a longer explanation of a focus I started putting into my presentations in June). This come to me web is something very few are doing, and/or doing well, in our design and development practices beyond personal content sites (and even there it really needs a lot of help in many cases). Focussing on the microcontent chunks (or granular content objects, in my personal phraseology), we can not only provide the means for others to best consume the information we are providing, but also aggregate it and provide people with a better understanding of the world around them. More importantly, we provide the means to best use and reuse the information in people's lives.

Important in this flow of information is to keep the source and identity of the source. Having the ability to get back to the origination point of the content is essential to get more information, original context, and updates. Understanding the identity of the content provider will also help us understand perspective and shadings in the microcontent they have provided.



July 22, 2005

Make Nice with Mobile Users Easily

Those interested in making nice with mobile users trying to consume content aimed at the desktop browser market should take a peek at Make Your Site Mobile Friendly by Mike Davidson. This is one method that makes for a little less sweat and keeps some dollars in our budgets for other needs.



April 18, 2005

Adobe Buys Macromedia

Adobe buying Macromedia was not the news I wanted to wake up to this morning. My sole issue is competition, as without these two competing there is little push to advance. This is not a huge surprise, as there were many rumors over the last few years that Macromedia was on the block (most were expecting Microsoft to buy it and then spin out ColdFusion and the application tools to an outside buyer).

If this goes through, we have Dreamweaver/HomeSite as the dominant web development tool (it is making great strides toward standards compliant development, and GoLive needs much more work), desktop publishing becomes Adobe only, the market share of image editing and creation (Photoshop and Illustrator) goes to Adobe, application development goes to Macromedia's ColdFusion, and then we have the tough call with Flash and SVG. Flash is dominant, but SVG is open, and Flash Lite (for mobile) has really upset many developers, as deploying the player is up to the carrier and phone maker, not the content creator. This last step has really pushed many Flash developers away from Macromedia as they work to focus on mobile. Rumors that Adobe was working on an SVG mobile tool with open deployment had many developers for mobile really excited.

My hope would be for Macromedia customer service and pricing, and an Adobe Premium Suite with Dreamweaver and Flash thrown in for a well rounded package.



January 13, 2005

San Francisco Bound

I will be in San Francisco and the surrounding Bay Area on the 20th and 21st of January. There are many folks I would like to hang out and chat with. I have been swamped with a handful of things the past couple weeks, along with a huge flood of spam mail (I think I have the spam abated for the moment). If you are interested in talking blogs, folksonomy, Personal InfoCloud, Model of Attraction, mobile, interaction design, Web Standards, etc., please drop a note. Thursday evening may be the best option at the moment. Use the contact link above (needs JavaScript on) or send to thomas at this domain.



August 26, 2004

Microsoft Shows They Can Learn

Microsoft redesigns and takes a great step toward standards. Do they have everything right yet? No. Will they get there? They do not have far to go. They do need to fix the site to work better in the standards-based browsers that people are moving to. They need a doctype and some other essentials, but at least they are showing they are learning.

One thing that stands out to me is the lack of uppercase and mixed-case tags and attributes. This is huge, as the Microsoft tools that are in production for consumers do not do this. To date the Microsoft development tools have failed developers, as they have not made it easy to output tags and attributes in the standards compliant case for XHTML, which is lowercase.
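To illustrate the difference (my own sketch, not markup from Microsoft's site):

    <!-- Mixed-case output, common from tools of the day; not valid XHTML -->
    <DIV Class="promo"><A HREF="/products/">Products</A></DIV>

    <!-- The lowercase tags and attributes XHTML requires -->
    <div class="promo"><a href="/products/">Products</a></div>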

Thanks to Matt's write-up and Doug's write-up, which ties back to his own previous comments about throwing out tables.



August 25, 2004

A Wonderful Redesign

I need to give a pointer to one of the wonderful redesigns of late: Jeff Gates' Life Outtacontext. I have been enjoying it for a couple weeks now. I particularly like what happens when I scroll to the bottom of the page. Jeff does not update his wonderful content frequently, but the design has me going back often.



Chevy Redesigns with Standards

Chevrolet has redesigned with fully valid XHTML (strict) and CSS (one minor issue in the style sheet). It is beautiful and wonderfully functional. All the information can be easily copied and pasted to help the discerning car buyer build their own crib sheet. The left navigation (browsing structure) is wonderful: not a silly image, but a definition list that is expandable. The style layer is semantic, which is a great help also (for those IAs who understand). Those of you so inclined, take a look under the hood, as there are many good things there.



August 23, 2004

Browse Happy

The Web Standards Project (WaSP) has launched (and will continue to sponsor) Browse Happy. Browse Happy is a site that focusses on web browser alternatives to Microsoft IE. Many computers come with IE installed, either as part of the operating system or as an arrangement with the producer of the operating system.

Over the years Microsoft listened to complaints about their browser's lack of standards compliance (no browser was doing this well at the time). They took a huge leap and built a browser that was much better at complying with the standards than others. This allowed the developers of sites and content to no longer build to each browser, but to build to one standard. IE at this point was not perfect, but it was so much better than it had ever been, and it truly allowed developers to build to specifications and have their work run well on standards compliant browsers. Nearly everybody loved Microsoft for these advancements.

Unfortunately Microsoft thought good was good enough and stopped IE development in 2001. It was not and is not fully standards compliant. On standards IE is now so far behind nearly all the other, standards compliant browsers that a developer must hack their perfectly valid code to get it to work properly in an IE browser. Sites that develop for IE have serious problems when viewed in other browsers, which is becoming more and more the trend as mobile devices take off and people are forced to replace IE because of security problems.

Enough about the poor little developer; it is the people who use web browsers that should get the attention. Many have realized there are better options than IE. As IE development stopped in 2001 (except for excessive security patches), the rest of the browser developers continued to make progress. As it is with any technology, if one stands still with development, others will pass them and possibly make them irrelevant.

Other browsers have now passed IE not only on standards compliance, but on accessibility (do you have problems reading some sites because the type is too small? nearly all other browsers let you easily change the type to make it larger and easier to read), rendering speed (pages show up much faster on the screen), rendering bugs (very few, so all the content will show up on the page), better user experience and ease of use (tabbed browsing, pop-up blocking, etc.), and security (the U.S. Department of Homeland Security has not warned people to stop using other browsers; the security problems, while they occasionally arise on other browsers, are not regular events that require people to continually update their browser).

Browse Happy is a collection of real people who have found the world of non-IE browsers and discovered that the painful experiences of IE are not needed. They have found the other browsers are very easy to load on their machines. These real people have also found they can return to the joy of browsing the web that they once knew in more innocent times.



August 19, 2004

Accessibility is Little More Than Web Best Practices

Today I gave my Accessibility is Little More Than Web Best Practices presentation (124kb PDF) to the Adaptive Path User Experience Week 2004 DC attendees as a lunchtime discussion. It was good to find folks in the DC area interested in Web Standards (a very big part of best practices) and figuring out how to sell accessibility to their clients who are required by law to have accessible sites. This presentation is quite similar to my STC presentation, but adds the few things that are required for accessibility that are not part of web best practices (these apply to tables and forms).



August 4, 2004

Naked Div and Span Tags Lead to Embarrassment

A word to the wise, don't use naked div or span tags in your markup, as you are asking for trouble. Many validation tools will let you know you have messed up, but you will soon realize this as you start extending your design with CSS.

What is a naked div or span? Look in your markup, and if you see <div> or <span> with no attributes, you have naked tags. A div or a span tag should always have an id or class attribute that defines what it is doing. Styling bare div or span selectors in your CSS is one giant hint things are going wrong. Put the semantic markup that must be in place first, then use an id or class to hang all other presentation layers on.
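A quick sketch of the difference (my own illustrative markup):

    <!-- Naked tags: nothing says what these elements are doing -->
    <div>
      <span>Editor's pick</span>
    </div>

    <!-- The same elements with id and class defining their purpose -->
    <div id="sidebar">
      <span class="pick">Editor's pick</span>
    </div>

The CSS can then target #sidebar and .pick directly, and the markup still carries its meaning if the style sheet changes or disappears.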

Sooner or later a class or id attribute will need to be dropped into that div or span, and the styling may lose its intended value; since the CSS and markup were not used correctly from the start, the headache begins. Naked div and span tags lead to embarrassment at best, or headaches and cursing for those that have to clean up the mess.



July 19, 2004

Web Standards Opening

Are you looking to practice and hone your standards compliant web design craft? Are you looking for an environment that is Web Standards friendly and want to join a solid Web development team? You may now have found a match. Does your vernacular include "Zeldman, Eric, Tantek, Bowman, Christopher, Shea, and/or Molly said..."? Are you looking to get recognized for your Standards work? Can you make Photoshop purr? Do you know the bugs in Dreamweaver's rendering engine? Can you live with just one table in your layout? Are you proud of your craft and want to hone it more?

If you answered yes and are looking for a change of scenery read the following and send me an e-mail (see contact above).

We are looking to hire a strong Web Designer who has strong experience hand-coding Web Standards (HTML, XHTML, and CSS) that validate. The designer must also have experience with accessibility (Section 508) and have solid web graphic design skills. Experience with information architecture and user-centered design processes is very helpful (wireframes, usability testing, etc.), as is experience leading design and redesign processes. Strong communication skills, including design documentation, are essential. We design with Dreamweaver and HomeSite and use Adobe and Macromedia graphics applications. [INDUS Corporation Web Designer Job Listing]


July 16, 2004

Web Standards and IA Process Married

Nate Koechley posts his WebVision 2004 presentation on Web Standards and IA. This flat out rocks, as it echoes what I have been doing and refining for the last three years or more. The development team at work has been using this nearly exclusively for a couple of years now on redesigns and new designs. This process makes it very easy to draft in a simple wireframe, then move to functional wireframes that are clickable and have named content objects in the CSS. The next step is building the visual presentation with colors and images.

This process has eased the lack-of-content problem (no content, no site, no matter how pretty one thinks it is) often held up by the "more purple and make it bigger" contingents. This practice has cut development and design time by more than half and greatly decreases maintenance time. One of the best attributes is the decreased documentation time, as using the Web Developer Extension toolbar in Firefox exposes the class and id attributes that provide semantic structure (among many other things this great tool provides). When the structure is exposed, documentation becomes a breeze. I cannot think of how or why we ever did anything differently.
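A rough sketch of the functional wireframe stage (illustrative names of mine, not from Nate's presentation): the structural markup with named content objects comes first, and the visual presentation is layered onto the same hooks later.

    <div id="masthead"><h1>Site Name</h1></div>
    <div id="content">
      <h2>Page Title</h2>
      <p>Draft content sits here while the structure is tested and clickable.</p>
    </div>
    <div id="siteinfo"><p>Contact and copyright details.</p></div>

    /* Wireframe stage: plain outlines to expose the named content objects */
    #masthead, #content, #siteinfo { border: 1px solid #999; margin: 1em; }

    /* Visual design stage: colors and images are added to the same structure */
    #masthead { background: #336; color: #fff; }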



Best Web Development Practices

Those of you looking for a relatively short article or essay on current best Web practices should look no further than the Best Web Development Practices provided by Apple. Yes, it focusses on web standards, but what best practice does not? Web standards are the cornerstone of accessibility, and they make the same content usable on mobile devices (one caveat: the article will not print on 8.5 by 11 inch paper).



April 1, 2004

Join the March for Web Standards

Get your umbrella and head to the Mall in Washington, DC today for the March for Web Standards (M4WeSt). It is a great cause and a couple hundred thousand are expected even in the rain.



December 8, 2003

WaSP interview with Todd Dominey

The Web Standards Project interviews Todd Dominey, who was behind the standards-based PGA redesign. The interview raises the problems Content Management Systems cause with valid markup. Todd also highlights that it is much easier to move toward standards when working from scratch than when cleaning up previously marked-up content.



December 2, 2003

Harper's redesigned

Harper's Magazine has been redesigned by Paul Ford. Paul discusses the Harper's redesign on his own site, Ftrain.

The site is filled with all the good stuff we love: valid XHTML, CSS, and accessible content (meaning well structured content). The site is clean and highlights the content, which is what Harper's is all about - great content. The site is not overfilled with images and items striking out for your attention; it is simply straightforward.

We bow down before Paul and congratulate him on a job very well done.



October 18, 2003

Info Cloud and Personal Info Cloud weblogs setup

We have set up a couple new sites using TypePad to focus on Info Clouds and more directly, the Personal Info Cloud. The Info Cloud and Personal Info Cloud are extensions of ideas that came out of the Model of Attraction work.

The information posted on the TypePad sites will most likely be syndicated here, or vice versa. Using TypePad eases the need to have a separate location for these ideas and works in progress. Off the Top will not be changing; it will still be a melting pot of ideas and information. Direct access to more focussed information on topics or categories is still available by clicking on the category below each entry or using the category list.

The information cloud work ties directly to standards, information architecture, content management, and general Web development passions that drive me.



August 27, 2003

Kottke and others on standards and semantics

Kottke provides a good overview of Web standards and semantically correct site development. Jason points out, as many have, that just because a site validates with the W3C does not mean that it is semantically correct. Actually, there are those that take umbrage with the use of the term semantic for (X)HTML, as many consider it structural tagging of the content instead, but I digress. A "valid" site could use a div tag where it should not have, for example where a paragraph tag should have been used instead. Proper structural markup is just as important as valid markup. The two are not mutually exclusive; in fact they are very good partners.

One means of marking up a page is to begin with NO tags on the page in a text editor, then mark up the content items based on what type of content they are. A paragraph gets a "p" tag, tabular data is placed in a table, a long quote is put in a "blockquote" tag, an ordered list gets "ol" tags surrounding it with the items in the list wrapped with "li" tags, and so forth. Using list tags to indent content can be avoided with this method. Once the structure has been properly added to the document, it is time to work with the CSS to add presentation flair. This is not rocket science, and the benefits are very helpful in transitioning the content to handheld devices and other uses. The information can be more easily scraped for automated purposes too, if needed.
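Worked through on a small chunk of content (my own example), the method looks like this:

    <h1>Trip Notes</h1>
    <p>A short write-up of the weekend.</p>
    <blockquote>
      <p>The passage quoted from another site goes here.</p>
    </blockquote>
    <ol>
      <li>First stop</li>
      <li>Second stop</li>
    </ol>

Every element states what its content is; indentation and all other visual treatment are then left to the CSS.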

It is unfortunate that many manufacturers of information tools do not follow this framework when transforming information into HTML from their proprietary formats. An MS Word document creates horrible garbage that is both non-structural and invalid. The Web is a wonderful means to share content, but mangled markup and no structure can render information inconsistent at best, if not useless.

While proper development is not rocket science, it does take somebody who knows what they are doing, and not guessing, to get it right.

Others are posting on Jason's post, like Doug Bowman and Dave Shea and have opened up comments. The feedback in Doug's comments is pretty good.



July 20, 2003

Bray on browsers and standards support

Tim Bray has posted an excellent essay on the state of Web browsers, which encompasses Netscape dropping browser development and Microsoft stopping stand-alone browser development (development seemingly continues only for MSN users and their next Operating System, which is due out in mid-2005 at the earliest).

Tim points out users do have a choice in the browsers they choose, and will be better off selecting a non-Microsoft browser. Tim quotes Peter-Paul Koch:

[Microsoft Internet] Explorer cannot support today's technology, or even yesterday's, because of the limitations of its code engine. So it moves towards the position Netscape 4 once held: the most serious liability in Web design and a prospective loser.

This is becoming a well understood assessment among Web designers and application developers that use browsers for their presentation layer. Developers that have tried moving to XHTML with table-less layout using CSS get the IE headaches, which are very similar to the Netscape 4 migraines. This environment of poor standards compliance is a world many Web developers and application developers have been watching erode, as the rest of the modern browser development firms have moved toward working to the one Web standard for HTML markup.

Companies that develop applications that can output solid standards compliant (X)HTML are at the forefront of their fields (see Quark). The creators of content understand the need to create not only a print version, but also digitally accessible versions, of which valid HTML or XHTML is one. The U.S. Department of Justice, in its Accessibility of State and Local Government Websites to People with Disabilities report, advises:

When posting documents on the website, always provide them in HTML or a text-based format (even if you are also providing them in another format, such as Portable Document Format (PDF)).

The reason is that HTML can be marked up to provide information to the various applications used by those that are disabled. The site readers that read (X)HTML content audibly for those with visual disabilities (or those having their news read to them as they drive) base their tools on the same Web standards most Web developers have been moving to the past few years. Not only do the disabled benefit, but so do those with mobile devices, as most of the mobile devices now employ browsers that comprehend standards compliant (X)HTML. There is no need to waste money on applications that create content for varied devices by repurposing the content and applying a new presentation layer. In the digital world (X)HTML can be the one presentation layer that fits all. It is that now.

Tim also points to browser options available for those that want a better browser.



June 23, 2003

ODBC on Apple Jaguar

ODBC in Apple Jaguar to help share data between applications.



June 20, 2003

Steve Champeon on the Future of Web Design

Steve Champeon on Progressive Enhancement and the Future of Web Design. This is almost like sitting with Steve and getting the background, and how that plays out for the future of markup and Web design, directly from Steve.



June 13, 2003

Zeldman's DWwS is a can't put down book for many

Today on my short drive to the Metro (about a mile) I saw two folks walking with Jeffrey Zeldman's Designing With Web Standards in hand. One of them was reading it while walking. I wanted to reach into my backseat, get my copy, hold it up, and honk (not a good safety move, so I held back my show of oneness).

I personally think this book rocks. This book helps prove I am sane, as it will easily support, in the many discussions at work, the decisions we made to incorporate standards-based Web development. We do not have a user base that permits the use of full XHTML and CSS2, like this site uses, but standards-based development has made maintenance of our pages far easier (45,000 to 55,000 pages in all, with 8,000 or more either moving toward standards-based validation or actually validating).

Jeffrey does a wonderful job writing about the whys and hows of standards-based development and design. He also makes understanding the benefits very easy to grasp.

This may be the one starter book for Web developers to help them sell Standards-based development or to learn why they should be embracing it and moving forward with learning and using it.



June 6, 2003

Testing HTML validation of output of tools

Knopf offers a comparison of how well Help Authoring Tools create HTML. The testing includes compactness of code, but even better is validating the output against the W3C. Dreamweaver MX does quite well in the testing. It would be good to expand the testing to some of the other tools, like FrontPage and GoLive.



May 29, 2003

CSS and Microsoft's poor excuse for a browser

Tim Bray adds to the "Microsoft IE is garbage" chant that has been spreading around the Web developer community for some time. Oddly (and Tantek's work on it is the reminder why), the IE browser on the Mac is far more compliant. The font sizing issues that Tim discusses are largely only a problem in the Windows versions of the IE browser. Most other modern browsers (Mozilla, including its Netscape 6 and 7 variants; Opera; Safari; etc.) all resize fonts even if the fonts are set in pixels.

In the accessibility community, setting a fixed pixel size has been taboo for some time. As I talk with more people with vision problems, I find most do not use the Windows IE browser to view sites, but choose one of the other modern browsers, as they allow easy scaling of fonts (some, like Opera, even scale images). This seems to be a trait across visually challenged users. Most users with visual difficulties have a strong dislike for the Microsoft browser on this point alone. A few have mentioned they really like the Mozilla browsers, as they can easily change the skin on the browser to make the buttons and other elements more visible.
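The difference comes down to one declaration in the style sheet (an illustrative sketch of mine): pixel units lock the text size in Windows IE, while relative units leave it scalable by the person reading.

    /* Fixed: Windows IE will not let the reader resize this text */
    p { font-size: 11px; }

    /* Relative: scales with the reader's browser settings */
    p { font-size: 0.9em; }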

Me, I can read Tim's site just fine, which is ideal as Tim understands the problems and knows where the blame should reside.

Note: The MS IE browser on Windows shows its downfalls to those trying to use modern Web development techniques, with CSS layouts rather than table layouts. As Web developers learn, tableless layout is a pain initially, largely because IE 6 and lower do not follow the rules properly. To get Windows IE to render properly, one has to hack the valid CSS to get the browser to render the page the way a browser that follows the standards does. The irony is Microsoft claims to own the CSS patent.



May 20, 2003

The bells are ringing

Great news from the Carrie Bickner and Jeffrey Zeldman camps. These two are getting married. We wish them all the best and much much more.



May 14, 2003

Building with Web Standards or how Zeldman got the future now

I am awaiting Jeffrey Zeldman's Designing with Web Standards, which is available for order from Amazon (Designing with Web Standards). I have been a believer in designing with Web standards for years, but it was Jeffrey that pushed me over the edge to become an evangelist for Web standards. One of the best things going for Web standards is they make validation of markup easy, which is one of the first steps in making a Web site accessible.

I work in an environment that requires Web standard compliance as it provides information to the public as a public good. Taxpayers have coughed up their hard earned dollars to pay for research and services, which are delivered to them on the Web. The public may access information from a kiosk in an underfunded library with a donated computer on a dial-up connection, but they can get to information that they are seeking. The user may be disabled and relying on assistive technology to read the public information. The user may be tracking down information from a mobile device as they are travelling across country on their family vacation. Each of these users can easily get the public information they are seeking from one source, a standard compliant Web page.

Every new page that is developed by the team I am on validates to HTML 4.01 transitional. Why 4.01 transitional and not XHTML? We support older browsers, and 4.01 transitional seems to provide pretty good access to information no matter the browser or device. We are not on the cutting edge, but we know nearly everybody can get the information their tax dollars have paid for. I dream of a day job building XHTML with full CSS layout, but with the clients I work for we still aim at the public good first.

I am very happy that Jeffrey has his book coming out, as it should bring to light for more developers what it means to build to Web standards. Every contract signed by the agency I work for must validate to HTML 4.01 Transitional, but very few of the sites do when they come through the door to be posted. We provide a lot of guidance to help other developers understand, but finding a solid foundation to build upon is tough. When hiring, many folks claim to have experience building valid sites, but most soon realize they never have, at least not to the degree of passing the W3C validator.

Building our pages to 4.01 does not mean we are going to stick with 4.01 forever. We plan for XHTML by closing all tags and staying away from tags deprecated in 4.01 Strict. Much of what we create only needs a few scripts run to convert the pages from HTML to XHTML 1.0 Transitional. Having the closing tags makes scripting to find information, and search and replace, much easier. (Enough for now, buy the book, we will have more later.)
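As a rough sketch of that habit, markup written this way validates as HTML 4.01 Transitional today and converts to XHTML with little more than search and replace (the content is made up for illustration):

  <p>Each paragraph gets an explicit closing tag.</p>
  <ul>
    <li>So does each list item.</li>
    <li>Presentational tags deprecated in 4.01 Strict, such as font, are avoided.</li>
  </ul>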



April 16, 2003

Using HTML tags properly to help external search results

There are some essentials to building Web pages that get found by external search engines. Understanding the tags in HTML and how they are (or rather should be) used is important. The main tags for most popular search engines are the title, headings (h1, h2, etc.), paragraph (p), and anchor (a). Different search engines have given some weight in their rankings to meta tags, but most do not use them or have decreased their value.

Google gives a lot of weight to the title tag, which is often what shows as the link Google gives its users to click for an entry. The wording in the title tag is important too, as the most specific information should be toward the front. A user searching for news may find a weblog toward the top of the results ahead of CNN, as CNN puts its name ahead of the title of the article. A title should also echo the contents of the page, as that helps the ranking of the page; titles that are not echoed in the page content can get a page flagged for removal from search engines.

The headings echo what is in the title and provide breaking points in the document. Headings not only help the user scan the page easily, but are also used by search engines to ensure the page is what it states it is. This echoing of terms moves an entry up the rankings, as the mechanical search engines get reinforcement that the information is on target for what their users may be seeking.
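A minimal sketch of the title and heading echo (the page topic and wording are made up for illustration):

  <head>
    <title>Folksonomy in the enterprise - Example Weblog</title>
  </head>
  <body>
    <h1>Folksonomy in the enterprise</h1>
    <h2>Why tagging helps findability</h2>
    <p>Folksonomy, the practice of open tagging, gives enterprise search more terms to work with.</p>
  </body>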

The paragraph tags also are used to help reinforce the text within them.

The anchor tags are used for links, and these links are what the search engines use to scrape and find other Web pages. The text between the anchor tags is also used by search engines to weight their rankings. If you want users to find information deep in your site, put a short clear description between the anchor tags. The W3C standards include a title attribute for the anchor tag, which some search tools also use. The title attribute is also used by some screen readers (used by those with visual difficulties and those who want their information read aloud to them, perhaps because they are driving or have their hands otherwise occupied) to replace or augment the information between the anchor tags.
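For example, a deep link with descriptive anchor text and a title attribute (the URL and wording are hypothetical):

  <a href="/essays/structure.html"
     title="An essay on why structured markup improves findability">Why structure improves findability</a>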

Example

The application I built to manage this weblog section is built to use each of these elements. This often results in high rankings in Google (and relatedly Yahoo), but that is not the intent; I am just a little fussy in that area. It gets to be very odd when my weblog posting reviewing a meal at Ten Penh sits at or near the top of a Google search for Ten Penh, while the link for the Ten Penh restaurant itself is near the bottom of the first page.

Why is the restaurant not the top link? There are a few possible reasons. The restaurant page has its name as "tenpenh" in the title tag, which is very odd, or sloppy. The page contains neither a heading tag nor a paragraph tag, as the site is built with Flash, so there is no semantic structure for those search engines that scrape Flash to use. Equally, the internal page links are not read by a search engine, as they are in Flash also. A norm for many sites is having the site's logo in the upper left corner clickable to the home page, which, with the use of the alt attribute on an image tag within an anchor link, allows each page to add value to the home page's ranking (if the alt attribute read "Ten Penh Home", for example).
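A sketch of that logo link pattern (the file name and dimensions are made up):

  <a href="/"><img src="logo.gif" alt="Ten Penh Home" width="120" height="60" border="0"></a>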

Not only does Flash hinder the scraping of information, the use of JavaScript links wipes out another means of increasing search rankings. Pages with dynamic links, often believed to ease browsing (which may or may not prove the case, depending on the site's users and the site's goals in actual user testing), hurt the site's ability to be found by external search engines. Search engines cannot scrape links or text written out by JavaScript.
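The difference is easy to see side by side (the page and function names are hypothetical):

  <!-- invisible to a search engine scraper: the URL only exists inside script -->
  <a href="javascript:void(0)" onclick="showPage('menu')">Menu</a>

  <!-- a plain anchor a scraper can follow -->
  <a href="/menu.html">Menu</a>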



April 7, 2003

Meet the Makers chats with Steve Champeon

Meet the Makers chats up Steve Champeon. Steve is one of the founders of WaSP, has written and edited many tech books, and just flat out rocks. You are still saying who? Go read.



March 2, 2003

Me with Japanese characters and others

I have wild and adventurous dreams, but my subconscious did not consider seeing my name on a page of Japanese characters. I have had The Vapors' Turning Japanese running through my head since I saw this.



February 19, 2003

WaSP Buzzing

The current Buzz is that the Web Standards Project is growing and offering a new perspective, and as is noted in the WaSP press release, I am now a member of the WaSP clan. This takes advantage of what I already do in my free time: trying to build a better Web and working on structuring information for use and reuse. This smart group dovetails very nicely with the smart group of information architects that absorbs another chunk of my free time.



January 27, 2003

Apple Word Replacement Rumor and Information Structure Dreams

Rumor has it Apple is working on an MS Word replacement. This would be a great thing if it reads native Word files seamlessly, but even better would be turning out valid HTML/XHTML. MS Word has always made a huge mess of our information with its conversion to something it "calls" HTML; it is not even passable HTML. One could not get a job using what Microsoft outputs as HTML as a work sample, heck, it would not even pass the laugh test, and it may get somebody fired.

One of the downsides of the MS Office products is that they are created for styling information, not for marking up information with structure on which style can hang. MS Word allows people (if they turn on, or keep turned on, the options) to create information sculptures with structure and formatting. What Word outputs to non-Word formats is an information blob that has lost nearly all of its structure and functionality; it does not even retain the format the Word document had to begin with. What Web developers do is put the structure back into the information blob to recreate an information sculpture again.

You ask why structure is important? Structure provides the insight to know what is a header and what is a sub-header. Structure provides the ability to discern bulleted lists and outlines. Structure makes it script-kiddie easy to create a table of contents. Structure makes micro-content accessible and easier to find with search. Structure provides better context. Structure provides the ability to know what is a quote from an external document and to point to it easily. Structure makes information portability and mobile access easier. These are just a few uses of structure.
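The difference shows up plainly in markup; a small sketch of the same heading with and without structure:

  <!-- structured: any tool can tell this is a second-level heading -->
  <h2>Quarterly Results</h2>

  <!-- styled only: the look survives, the meaning is gone -->
  <p><font size="4"><b>Quarterly Results</b></font></p>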

Does MS Word have this structure capability? Yes. Do people use it? Not really. If people use it, does MS Word keep the structure? Rarely, as it usually turns the structure into style. This is much like somebody who spent months in the gym building a well defined physique only to have the muscles removed and their shirt stuffed with tissue paper to keep the look of being in shape. Does the person with the tissue paper muscles have the ability to perform the same as the person who is really in shape? Not even close.

Structure is important not only for the attributes listed above, but also for people with disabilities who depend on information being structured to get the same understanding as a person without disabilities. You say MS Word is an accessible application? You are mostly correct. Does it create accessible information documents? Barely, at best. The best format for information structure lies in HTML/XHTML/XML, not in styles.

One place where structure is currently of great value is Internet search. Google is the top search engine on the Internet. Google uses the text in hyperlinks, the information in title tags, and the information in heading tags to improve the findability of a Web page. What are these tagged elements? Structure.

One of the nice things about a valid HTML/XHTML Web document is that I can see it and use it on my cell phone or other mobile devices. You can navigate without buttons and read the page in chunks. Some systems preparse the pages and offer the ability to jump between headings to more quickly get to the information desired.

These are just a few of the reasons I am intrigued by the Apple rumor. There is hope for a tool producing well structured documents that can output information in a structured form that validates to the W3C standards, which browsers now use to properly render the information on the page. I have very little hope in the stories that MS is working toward an XML storage capability for Office documents, because we heard this same story with the last few Office releases and all were functional lies.



January 22, 2003

W3C breaks the silence with captioning

Meryl notes the W3C has added captioning to its scope by creating a TTWG (Timed Text Working Group). This is a great addition for the W3C and for those who have been left in silence.



January 14, 2003

Zeldman discusses XHTML 2

Zeldman provides insight into XHTML 2, responding to and agreeing with Mark Pilgrim's Semantic Obsolescence rant.



January 5, 2003

More future proofing information

Speaking of future proofing your information, Mark discusses CMS and information reuse. One quote that brings this to light is:

This ties you to your content management system. The further removed your raw data is from your published form, the harder it will be to migrate away from the tools that convert one to the other.

Mark also discusses how, working from HTML, he then created PDF files of his Dive Into Accessibility essays. HTML has much of the semantic tooling needed and the structure to provide a reusable information repository.



Smart Mobs and Emergence provide sparks

I began reading Smart Mobs by Howard Rheingold over the past few days. It is a fantastic book that covers a lot of ground, including free riders, game theory, mobile technology, information creation, and information use and reuse. The book is proving an excellent follow-on to Steven Johnson's Emergence. The two books are wonderful mind-joggers and fodder for new perceptions about information, technology, and the world around us. A trait both share is excellent bibliographies and endnotes (though the endnotes in both books are not very user friendly and seem to be structured for hypertext rather than paper books).

These two books put the focus on being future friendly, which does not mean anything new, but reinforces my belief in properly structured information. Information use and reuse are key elements in both books, which embrace bottom-up information creation and knowledge sharing. The need for access to information drives Smart Mobs; whether it is to grow open development or to enable mobile use, access is important. The best access environment we have in place at the moment is valid HTML/XHTML used to properly structure the information.

This also requires thinking through every pixel on a Web page and understanding its purpose. Understanding the user will help provide a framework for building information interfaces. The information/content should take precedence too; that is why users are reading, not for the entertaining graphics. Keep in mind that structured information can be reused on mobile devices that may not use your images, may be scraped and repurposed, may be printed, or may be read aloud by a screen reader to a person who is driving or to a person with visual difficulties.

You may want to get your hands on either or both books and take a look for yourself; you may be inspired in new ways or have your beliefs about information and its use renewed.



April 16, 2002

Related to using the proper URL in the doctype, IE 6 renders table content as centered when the table is wrapped with the center tag. The article explains when this happens and how to work around the problem. One option is not using centering at all (either via div align or the deprecated center tag). Setting the centering in CSS seems to be the best workaround. We have found using the center tag to be far more problematic than the div align usage of center. (Yes, we have to support "older" browsers at work.)
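A rough sketch of the CSS approach, which also humors older IE (the class name is made up for illustration):

  body { text-align: center; }    /* older IE centers block elements this way */
  table.data {
    margin-left: auto;            /* compliant browsers center with auto margins */
    margin-right: auto;
    text-align: left;             /* restore normal text alignment inside the table */
  }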


Zeldman explains proper Doctype usage so the browser uses the rendering mode you intend. Many Web development applications leave the URL off the Doctype statement, which causes many browsers to fall back to their lowest common denominator rendering.
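For instance, the difference is just the URL at the end of the declaration:

  <!-- complete doctype: browsers switch into standards mode -->
  <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"
    "http://www.w3.org/TR/html4/loose.dtd">

  <!-- URL missing: many browsers fall back to quirks mode rendering -->
  <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">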


April 2, 2002

Accessibility benefits

The W3C provides an overview of the benefits of accessible Web design. Building sites so they are accessible is not just a task that must be done; it is an asset.


February 27, 2002

Steven, in an open letter to the Web development community, asks why he should redesign with CSS. The letter covers a lot of ground and offers good insights into the arguments for and against the move to Web standards. [hat tip Xblog]


February 11, 2002

Shirley Chan offers Web Site Management-Policy and Standards, which provides a great way to move toward consistency on a large site.


February 8, 2002

The U.S. Government urges its tech developers to focus on standards for XML. This and more is discussed in this O'Reilly Net article.


January 16, 2002

Shirley Kaiser discusses accessibility and Adobe Acrobat on her site.

There is one element in Adobe Acrobat that does not meet the Government's Section 508 compliance, if that is the yardstick being used for accessibility. The area of non-compliance is complex tables. PDF tagging only has TABLE, TR, TH, and TD tags available, and these do not accept scope. Scope is what helps complex tables become compliant in HTML. The only acceptable method for providing information in complex tables is HTML, at this point. One workaround is to make the complex tables attachments or addenda, remove them from the original document (should it be provided in PDF format), and supply them only in HTML.

To keep it clear, a complex table is one that has more than one set of header rows, where often one of the header rows spans a selection of columns. An example would be a table showing fiscal quarters of the year and the months that fall within those quarters, followed by rows of related numbers. The top two rows create a complex header, as each quarter header spans three month columns and defines the months directly below it. Voice readers capture these relationships quite well with the use of the scope attribute in HTML (this looks like <th scope="col"> for header cells and <td scope="row"> in the first cell of each of the table's rows). Unfortunately, PDF does not have a corresponding tag.
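A minimal sketch of such a table in HTML, with made-up numbers (scope="colgroup" on the spanning quarter header is the HTML 4.01 way to tie it to its group of columns):

  <table summary="Revenue by month, grouped by fiscal quarter">
    <colgroup span="3"></colgroup>
    <tr>
      <th colspan="3" scope="colgroup">First Quarter</th>
    </tr>
    <tr>
      <th scope="col">Jan</th>
      <th scope="col">Feb</th>
      <th scope="col">Mar</th>
    </tr>
    <tr>
      <td scope="row">Revenue</td>
      <td>100</td>
      <td>120</td>
      <td>140</td>
    </tr>
  </table>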

I also pointed out that the more current Web browsers permit using CSS with a print media designation that allows a better print representation of the information. This would help people build one document, whether it be a Web site or a PDF, that prints in the desired manner and is accessible.
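The mechanism is just a second stylesheet flagged for print (the file names are hypothetical):

  <link rel="stylesheet" type="text/css" media="screen" href="screen.css">
  <link rel="stylesheet" type="text/css" media="print" href="print.css">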

This may help keep you and your readers in the clear if 508 is the standard upon which their accessibility work is being performed. Unfortunately, there are no compliance standards, only guidelines. But for most federal government organizations, one must meet all of the targets to be compliant; 508 is a pass/fail hurdle.

Further information on 508 may be found at www.usability.gov/accessibility/index.html.



January 7, 2002

Standards are moving quickly and being embraced, states CNet News. The article points out the benefits and strengths of having standards and of building to and on standards. Oh yes, Jeffrey Zeldman provides his input, so this really is a standards article of merit.


January 6, 2002

Moving to XHTML and general updates

There are some changes around here. The links page has been updated with some new links, updated links, and a few removed (ones that I was not visiting for various reasons or that had gone dead).

The links and about pages are both converted to XHTML and are validating, for the most part, to XHTML Transitional. The next step will be to get this section, Off the Top, to validate. This will take a little more effort, as it will require making some edits to the templates and internal code validation. Not a monstrous task, but a task nonetheless. A large part of the conversion in this section is creating compliant output from non-standard input. Much of this section does not use starting paragraph tags (<p>), which will take some work to amend.
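One plausible shape of that amendment, roughly sketched (the entry text is made up):

  <!-- non-standard input: paragraphs separated only by breaks -->
  Some entry text.<br><br>
  More entry text.

  <!-- compliant output: every paragraph opened and closed -->
  <p>Some entry text.</p>
  <p>More entry text.</p>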

This means the site is finally moving toward being standards compliant, which will bring easier display of information across browsers (standards compliant browsers, which most are becoming), easier maintenance, and information reuse.



December 16, 2001

Zeldman has been busy while I was away. It is always good to keep an eye on what Jeffrey is up to, particularly when he is talking about the Web and standards.



This work is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike License.