Off the Top: RSS Entries
2025 Vanderwal.net Backend Modernization is Done
A couple of years ago I thought I would update the backend code from PHP 5.6 to PHP 7, but initial progress was hindered by the time I had available.
Planning the Modernization Work
A few weeks back I started looking at it again and mapped it out properly, like a project. I realized PHP 7 had already reached end of life and I should head straight to PHP 8, so that target was set. I had planned on keeping things relatively simple, using a database connection quite similar to what I had used before, but digging through PHP 8 books and resources on the O’Reilly Learning Platform, everything was using a newer, more flexible method. After digging further I took the route that required a bit more work modifying existing code (some of it going back to 2000 and 2001). But, as I dug into the work, I realized I only needed to modify and modernize about 20% to 30% of the code in the pages and templates.
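The post doesn't name the newer method, but in most PHP 8 resources the "newer, more flexible" database approach is PDO with prepared statements. A minimal sketch of what replacing an old mysql_* style query might look like, assuming PDO; the table and column names are hypothetical, not vanderwal.net's actual schema:

```php
<?php
// A minimal sketch, assuming the "newer more flexible method" is PDO.
// Table and column names are hypothetical, not vanderwal.net's schema.
$pdo = new PDO(
    'mysql:host=localhost;dbname=weblog;charset=utf8mb4',
    'dbuser',
    'dbpass',
    [
        PDO::ATTR_ERRMODE            => PDO::ERRMODE_EXCEPTION,
        PDO::ATTR_DEFAULT_FETCH_MODE => PDO::FETCH_ASSOC,
    ]
);

// A prepared statement replaces string-built mysql_query() calls.
$stmt = $pdo->prepare(
    'SELECT title, body, posted FROM entries
     WHERE category = :cat ORDER BY posted DESC'
);
$stmt->execute([':cat' => 'Folksonomy']);

foreach ($stmt as $entry) {
    echo htmlspecialchars($entry['title']), "\n";
}
```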
In doing this I also realized my old method of security around the system management backend was no longer working, so it had to be rewritten as well. That meant rebuilding the backend screens. Those updates went live two days ago on the 19th.
With that done, it was back to the last third or so of the pages and templates that are public facing. I had already reworked the category output pages and added pagination to them. No longer will all 121 Folksonomy-categorized posts show up on one screen; only 15 at a time will. The “Personal” category has 369 posts (it is a blog, so it is about me, you see, just not all of it).
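A sketch of the kind of LIMIT/OFFSET pagination involved; the 15-per-page figure is from the post, while the query and variable names are illustrative (this assumes the $pdo handle from the connection sketch above):

```php
<?php
// Sketch of 15-per-page category pagination; schema names are hypothetical.
// Assumes $pdo from the connection sketch above and a $category value.
$perPage = 15;
$page    = max(1, (int)($_GET['page'] ?? 1));
$offset  = ($page - 1) * $perPage;

$stmt = $pdo->prepare(
    'SELECT title, body, posted FROM entries
     WHERE category = :cat
     ORDER BY posted DESC
     LIMIT :limit OFFSET :offset'
);
$stmt->bindValue(':cat', $category);
$stmt->bindValue(':limit', $perPage, PDO::PARAM_INT);   // LIMIT params must
$stmt->bindValue(':offset', $offset, PDO::PARAM_INT);   // bind as integers
$stmt->execute();
```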
The RSS feed received a very minor update to RSS 0.92 to keep in line with many of the OG methods that remain.
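For anyone curious what hand-rolled RSS 0.92 output looks like in PHP, here is a small sketch; the channel details and the $entries array are illustrative stand-ins for the real template's data, not the actual vanderwal.net feed code:

```php
<?php
// Sketch of minimal RSS 0.92 output; channel details and the $entries
// array are illustrative stand-ins for the real template's data.
$entries = [
    ['title' => 'Example post', 'permalink' => 'https://example.com/post/1',
     'summary' => 'Summary text'],
];
header('Content-Type: application/rss+xml; charset=utf-8');
echo '<?xml version="1.0" encoding="utf-8"?>', "\n";
?>
<rss version="0.92">
  <channel>
    <title>Off the Top</title>
    <link>https://vanderwal.net/random/</link>
    <description>Recent entries</description>
<?php foreach ($entries as $e): ?>
    <item>
      <title><?= htmlspecialchars($e['title']) ?></title>
      <link><?= htmlspecialchars($e['permalink']) ?></link>
      <description><?= htmlspecialchars($e['summary']) ?></description>
    </item>
<?php endforeach; ?>
  </channel>
</rss>
```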
The Actual Homepage has been Restructured
The homepage for vanderwal.net has been restructured to make it easier to find information that isn’t directly in the blog and that I get emails and DMs about somewhat regularly. Moving it to two columns helped this. I do need to move this to a flex or grid CSS model, as tweaking the current layout was rather tedious.
This Modernization was like Changing the Plumbing and Wiring in a Building
This modernization was like bringing the plumbing and wiring of a building up to new building code. The walls and structure are all pretty much the same. The top layer stays the same for now.
This modernization also allows me to hopefully finish setting up webmentions, which I’ve had partly wired since around 2021 or so. I just need the last piece of that to work. There are also other IndieWeb-related updates I’m planning to make and have been waiting to get this code updated before modifying and adding them into place. By the way, if you are running your own site and/or blog, the IndieWeb community has a gem. There are a lot of resources in their wiki and pages helping anybody with their own site.
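For context, sending a webmention per the W3C spec is a two-step dance: discover the endpoint the target page advertises, then POST the source and target URLs to it as form-encoded parameters. A minimal sketch in PHP; the discovery here is simplified and does not resolve relative endpoint URLs:

```php
<?php
// Sketch of sending a Webmention, per the W3C spec: discover the target's
// endpoint, then POST source and target as form-encoded parameters.
// Simplified: does not resolve relative endpoint URLs.
function send_webmention(string $source, string $target): bool
{
    $html = file_get_contents($target);
    $endpoint = null;

    // Discovery step 1: an HTTP Link header with rel="webmention".
    foreach ($http_response_header ?? [] as $h) {
        if (preg_match('/^Link:.*<([^>]+)>;\s*rel="?webmention"?/i', $h, $m)) {
            $endpoint = $m[1];
            break;
        }
    }
    // Discovery step 2: a <link> or <a> element with rel="webmention".
    if (!$endpoint && preg_match(
        '/<(?:link|a)[^>]+rel="webmention"[^>]+href="([^"]+)"/i', $html, $m)) {
        $endpoint = $m[1];
    }
    if (!$endpoint) {
        return false;
    }

    // Notification: POST source and target to the discovered endpoint.
    $ctx = stream_context_create(['http' => [
        'method'  => 'POST',
        'header'  => 'Content-Type: application/x-www-form-urlencoded',
        'content' => http_build_query(['source' => $source, 'target' => $target]),
    ]]);
    return file_get_contents($endpoint, false, $ctx) !== false;
}
```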
The pagination for the blog is likely going to change from a date- and month-focussed pagination to a page model, with the oldest selection being page 1. The archive page will get a long overdue update so it doesn’t stop at 2003 (looks at calendar, yep, it is out of date). I’m hoping to have an archive page that shows activity, but also addresses the different post types (essay, journal, and weblog) that only lasted the first few years; around the 2014 code update and site move, the entry type template went missing.
The category listing pages will also likely get an update, and the category pages may get some easier means of moving through the posts over time, beyond general pagination.
Assistance with the Update
This being 2025, the question pops up whether and how I was using generative AI as part of this. I was using Claude.ai from Anthropic for some initial questions, then I’d head to O’Reilly’s resources to validate them and learn what I needed to know (it had been about 10 years since I was knee deep in PHP). When coding and modernizing the pages and templates and I’d hit defects, I’d run those past Claude to sort out what the issue might be (sometimes a missing “;”, other times the new query wrapper and parsing method caused me to miss something, or I had deprecated code I hadn’t converted). Claude would point out my errors and instruct me on how to correct them. Sometimes it would offer a few options for approaches (some were not quite right and others were good, and I needed to select a path - after verifying and learning about them further). It also would crank out code. I gave Claude instructions not to bother with large chunks of my pages and code, which it left alone.
I use Claude stand-alone and used its Projects function to keep things focussed. I fed it the outlines and high-level task areas I have in GitHub and Obsidian, and it kept track of what was accomplished and how the work met the goals. The most impressive thing, compared to other generative AI options, is it was very strict about identifying things not viable in PHP 8 (and its iterative versions); nothing else did this well. Claude also had the code of pages and templates I had worked on and would point out that I was using a structure and method on other pages and ask if I shouldn’t use that practice on the page I had just fed it to sort out some defect I was working through. My code has had four or more iterations over the 25 years, and my early coding wasn’t so hot and still remained. Claude helped my code get more consistent, not by fixing it, but by pointing out I had something good and modern and should keep consistent with that. By the last couple of templates I didn’t need to have Claude check them, as they worked with my own editing, but I still fed them in, as it seems to help improve suggestions and catch inconsistencies of my own doing.
A year ago I tried this with OpenAI and its ChatGPT, and it was a hot mess. It couldn’t keep PHP versions correct. I try it with every update and I find it really problematic; what it outputs (code and other attempts) is nothing better than mediocre and often not correct.
IDE Use
In the last 10 to 15 years the IDE I’ve used to code and work on vanderwal.net has been from Panic: either Coda or, now, Nova, which have worked well. I have kept a good firewall between AI assistance and the IDE. I don’t mind type-ahead suggestions. But, finding deprecated code to address was something I was going to need. Some friends suggested I try PhpStorm by JetBrains, which seemed good, as I’ve used PyCharm a few times in the past and really enjoyed it. I knew I didn’t want VS Code near this, as I’ve pretty much had it with VS Code (I mostly use it with Python for data analytics) due to plug-in issues and the lack of ease keeping projects separated.
I picked up a trial of PhpStorm, and after a day or so I had the hang of a good portion of what I needed to do. My favorite part is being able to set the exact version of PHP you are working with; it highlights where there are errors and problems. In the last couple of days, as I finally was getting the hang of PHP 8 and the methods I was regularly using, PhpStorm was helping with type-ahead suggestions (there were a few times where I accidentally triggered them when I didn’t want them and nearly turned off that functionality - Control-Z is your friend). PhpStorm also can make use of GitHub Copilot, which I don’t find helpful with OpenAI connected to it, but it is better with Claude Sonnet. The downside with Copilot is it doesn’t have access to the Project space in Claude I’ve been working with, and therefore its suggestions are less on target (Copilot with Claude is light years better for PHP than OpenAI offerings). Essentially I didn’t use the incorporated genAI functionality, and I was very happy with that setup.
Posting Ease
One of the things I’m looking forward to is slightly better methods for posting to this site and managing posts. Many of the steps beyond creating and posting are manual, like kicking off creation of the RSS feed (I do that after a quick review of the created post as it is live; I kick the RSS feed after that review). Alerting the media, or the alerts beyond basic RSS, is also a manual step done after that review. I may automate the combination of those two kicks after a review.
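A sketch of what automating those two post-review kicks might look like; the feed rebuild is a stub standing in for the existing manual step, and the ping URL is a placeholder for whatever alert services are used (none of this is the actual vanderwal.net code):

```php
<?php
// A sketch of combining the two post-review "kicks" into one script.
// rebuild_rss_feed() is a stub for the existing feed step; the ping URL
// is a placeholder for whatever alert services are actually used.
function rebuild_rss_feed(): void
{
    // stub: regenerate the RSS 0.92 file (the existing manual kick)
}

function publish_after_review(): void
{
    rebuild_rss_feed(); // kick 1: the RSS feed

    // kick 2: alert the services beyond basic RSS
    $pings = ['https://example.com/ping?url=' . urlencode('https://vanderwal.net/random/')];
    foreach ($pings as $ping) {
        @file_get_contents($ping);
    }
}
```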
A World without Aaron Swartz
It was a weekend not focussing on technology or the internet much, but I saw some patterns that are the usual signs that something is not well; I had other things that took priority of focus. Sunday I gave a quick look and let out that deep human gasp, and my kid looked up and asked what was wrong as my head slumped. I was late to the news that Aaron Swartz had taken his life. I don’t know what site I read it on, but all my screens of services and people I follow closely were sharing the news and their remembrances.
Aaron’s passing was beyond “he is too young” and “what a shame”. He was not only someone special, he was changing the world and had been for quite a while: sorting through the battles of standardizing RSS, working with Creative Commons to create a modern licensing equivalent that could be relatively easily attached to published and shared content, allowing much better and more open access to it, and contributing to Reddit as it was hatching, staying with it through to it being purchased. He did so much more after that. But, he died at 26 years old. At 14, when he was partaking in the RSS discussions on listservs, nobody knew his age. Nobody had a clue, and it wasn’t known until they asked him to come to a gathering to discuss things face to face. This story of his age, and the wonderful story about how people found out, was spread on listservs I participated in and at post-conference drinks in the early 00s. At the age of 14, Aaron had lore. He had earned the respect and right to be a peer with the early grey beards of the Web who were battling to understand it all, help it work better, and make the world better because of it. Aaron fit right in.
I was at a few events and gatherings that Aaron was at in the early 00s, but I never had the chance to work or interact with him. But, I have worked and interacted with many who did have that fortune, and even with the lore Aaron had, they were impressed by his approach, capabilities, and what he could accomplish. The tech community is a meritocracy, and you earn credibility by doing. The world has always been changed by those who do, but also by those who are curious, but are guided by an understanding of an optimal right (correct way).
What we lost as a society was not only a young man who earned his place and credibility, and earned it early, but one who gave to others openly. All the efforts he put his heart and mind to had a greater benefit for all. The credo in the tech community is to give back more than you take. Aaron did that in spades, which made thinking of a future with whatever he was working on a bit more bright and promising. Aaron’s blog was at the top of my feed reader and was more than worth the time to read. He was a blogger, an open sharer of thoughts and insights, of questions as well as the pursuit of the answers to those questions. This is not the human norm; he was a broken one, in all the understandings that brings, being outside the mainstream norms. But, much like all those in the Apple “Think Different” ad campaign, he made a difference by thinking different and being different from those norms.
I love David Weinberger’s “Aaron Swartz was not a hacker. He was a builder.” as well as his “Why we mourn”. From the rough edges of hearing friends talk about their working with Aaron and following along with what he shared, we as a society were in for a special future. Doc Searls’ “Aaron Swartz and Freecom” lays out wonderfully the core of Aaron’s soul as a native to the Net in the virtues of NEA (Nobody owns it; Everybody can use it; Anybody can improve it). Doc also has a great collection of links, many from those who worked with Aaron or knew him well, on his “Losing Aaron Swartz” memorial page.
Dang it, we are one down. We are down a great one. But, this net, this future, and this society that fills this little planet needs the future we could have had; now it is ours to work together to build and make great.
How do we get there? Aaron’s first piece of advice in “Aaron Swartz: How to get my job” is, “Be curious. Read widely. Try new things. I think a lot of what people call intelligence just boils down to curiosity.”
Peace!
Closing Delicious? Lessons to be Learned
There was a kerfuffle a couple weeks back around Delicious, when the social bookmarking service was marked for end of life by Yahoo, which caused a rather large number of people I know to go rather nuts. Yahoo has made the claim that they are not shutting the service down, which only seems like a stall tactic, but perhaps they may actually sell it (many accounts from former Yahoo and Delicious team members have pointed out the difficulties in that, as it was ported to Yahoo’s own services and with their own peculiarities).
Redundancy
Nevertheless, this brings up an important point: redundancy. One lesson I learned many years ago related to the web (heck, related to anything digital) is that it will fail at some point. Cloud-based services are not immune, and the network connection to those services is often even more problematic. But, one of the tenets of the Personal InfoCloud is that it is where you keep your information across trusted services and devices so you have continual and easy access to that information. Part of ensuring that continual access is ensuring redundancy and backing up. Optimally, the redundancy or back-up is a usable service that permits ease of continuing use if one resource is not reachable (those sunny days where there’s not a cloud to be seen). Performing regular back-ups of your blog posts and other places you post information is valuable. Another option is a central aggregation point (these are long dreamt of and yet to be really implemented well; this is a long-brewing interest with many potential resources and conversations).
With regard to Delicious, I’ve used redundant services and manually or automatically fed them. I was doing this with Ma.gnol.ia, as it was (in part) my redundant social bookmarking service, but I also really liked a lot of its features and functionality (there were great social interaction design elements deployed there that were quite brilliant and made the service a real gem). I also used Diigo for a short while, but too many things there drove me crazy and continually broke. A few months back I started using Pinboard, as the private reincarnation of Ma.gnol.ia shut down. I have also used ZooTool, which has more of a visual design community (the community that self-aggregates around a service is an important characteristic to take into account, after the viability of the service).
Pinboard has been a real gem, as it uses the commonly implemented Delicious API (version 1) as its core API, which means most tools and services built on top of Delicious can be relatively easily ported over with just a change to the source URL. This was similar for Ma.gnol.ia and other services. But, Pinboard also will continually pull in Delicious postings, so it works very well for redundancy’s sake.
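Because Pinboard mirrors the Delicious version 1 API, porting a tool can be as small as swapping the base URL. A hedged sketch of posting a bookmark through that API; the endpoints and parameters follow the Delicious v1 scheme as commonly documented, but treat the details as assumptions rather than a definitive client:

```php
<?php
// Sketch of posting a bookmark via the Delicious-style v1 API.
// Porting from Delicious to Pinboard is largely a base-URL change.
$base = 'https://api.pinboard.in/v1';   // was: https://api.del.icio.us/v1

$params = http_build_query([
    'url'         => 'https://example.com/article',
    'description' => 'Article title',
    'tags'        => 'folksonomy bookmarking',
]);

// HTTP Basic auth, as the v1-style API used.
$ctx = stream_context_create(['http' => [
    'header' => 'Authorization: Basic ' . base64_encode('user:password'),
]]);

$xml = file_get_contents("$base/posts/add?$params", false, $ctx);
echo $xml; // the API replies with a small XML result code
```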
There are some things I quite like about Pinboard (some things I don’t, and I will get to them), such as the easy integration with Instapaper (anything you star in Instapaper gets sucked into your Pinboard). Pinboard has a rather good mobile web interface (something I loved about Ma.gnol.ia too). Pinboard was started by co-founders of Delicious and so has solid depth of understanding. Pinboard is also a pay service (based on an incremental one-time fee) with a full archive of pages bookmarked (it saves a copy of the pages), which is great for its longevity, as it has some sort of business model (I don’t have faith in the “underpants - something - profit” model), and it works brilliantly for keeping out spammers (another pain point for me with Diigo).
My biggest nit with Pinboard is the space-delimited tag terms, which means multi-word tag terms (San Francisco, recent discovery, etc.) are not possible (non-alphabetic word delimiters (like underscores, hyphens, and dots) are really problematic for clarity, for easy aggregation without scripting to disambiguate and assemble relevant related terms, and for mainstream user understanding). The lack of an easy way to see who is following my shared items, so as to find others to potentially follow, is something from Delicious I miss.
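The disambiguation scripting mentioned above might look something like this sketch, which folds delimiter variants of a multi-word tag into one canonical term; the variant list is illustrative:

```php
<?php
// Sketch: normalize delimiter variants of multi-word tags so
// "san_francisco", "san-francisco", and "san.francisco" aggregate together.
function normalize_tag(string $tag): string
{
    return strtolower(preg_replace('/[_.\-]+/', ' ', $tag));
}

$tags = ['San_Francisco', 'san-francisco', 'san.francisco', 'SanFrancisco'];
$counts = [];
foreach ($tags as $t) {
    $key = normalize_tag($t);
    $counts[$key] = ($counts[$key] ?? 0) + 1;
}
// Note: "SanFrancisco" (no delimiter) still won't fold in without a
// dictionary, which is exactly the clarity problem described above.
print_r($counts);
```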
For now I am still feeding Delicious as my primary source, which is naturally pulled into Pinboard with no extra effort (as it should be with many things), but I'm already looking for a redundancy for Pinboard given the questionable state of Delicious.
The Value of Delicious
Another thing that surfaced with the Delicious end-of-life (non-official) announcement from Yahoo was the incredible value it has across the web. Not only do people use it and deeply rely on it for storing links/bookmarks, contextualizing them with tags and annotations, refinding their own aggregation, and sharing this out easily for others, but they use Delicious in a wide variety of different ways. People use Delicious to surface relevant information related to their affinities or work needs, as it is easy to get a feed for not only a person or a tag, but also a person and tag pairing. The immediate responses that sounded serious alarm at the news of Delicious’s demise came from those who had built valuable services on top of Delicious. There were many stories about well-known publications and services not only programmatically aggregating potentially relevant and tangential information for research, ad hoc and in relatively real time, but also sharing out links for others. Some use Delicious to easily build “related information” resources for their web publications and offerings. One example comes from Marshall Kirkpatrick of ReadWriteWeb, wonderfully describing their reliance on Delicious.
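Those person-and-tag pairings were available as feeds, which is what made the programmatic aggregation above so easy. A sketch of consuming one in PHP; the feed URL pattern follows the Delicious v2 feed scheme as I recall it, so treat it as an assumption and verify before relying on it:

```php
<?php
// Sketch: pull the feed for a person + tag pairing and list the links.
// The URL pattern is from memory of the Delicious v2 feeds; verify it.
$user = 'vanderwal';
$tag  = 'folksonomy';
$feed = simplexml_load_file("http://feeds.delicious.com/v2/rss/$user/$tag");

foreach ($feed->channel->item as $item) {
    echo $item->title, ' - ', $item->link, "\n";
}
```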
It was clear very quickly that Yahoo is sitting on a real backbone of many things on the web, not the toy product some in Yahoo management seemed to think it was. The value of Delicious to Yahoo seemingly diminished greatly after they were no longer in the search marketplace. Quietly confirmed hunches that Delicious was used as fodder to greatly influence search algorithms (highly probable synonyms and related web content stored out of explicit interest carry much higher value than inferred interest) made Delicious a quite valued property while Yahoo ran its own search offering.
For ease of finding me (should you wish) on Pinboard, I am at http://pinboard.in/u:vanderwal
As If Had Read
The idea of a tag “as if had read” started as a riff off of riffs with David Weinberger at Reboot 2008, regarding the “to read” tag that is prevalent in many social bookmarking sites. But, “as if had read” is not as tongue-in-cheek at the moment; it is a moment of ah-ha!
I have been using DevonThink on my Mac for 5 or more years. It is a document, note, web page, and general content catch-all that is easily searched. But, it also pulls out other items that it sees as relevant to the one in focus. The connections it makes are often quite impressive.
My Info Churning Patterns
I have promised for quite a few years that I would write up how I work through my inbound content. This process changes a lot, but it is back to a settled state again (mostly). Going back 10 years or more, I would go through my links page and check all of the links on it (75 to 100 links at that point) to see if there was something new or of interest.
But, that changed to using a feed reader (I used, and am back to using, NetNewsWire on the Mac, as it has the features I love, it is fast, and I can skim 4x to 5x the content I can in Google Reader - interface and design matter) to pull in 400 or more RSS feeds that I would triage. I would skim the new (bold) titles and skim the content in the reader; if it was of potential interest I opened the link into a browser tab in the background and just churned through the skimming of the 1,000 to 1,400 new items each night. Then I would open the browser to read the tabs. At this stage I actually read the content, and if part way through I did not think it had current or future value I closed the tab. But, in about 90 minutes I could triage through 1,200 to 1,400 new RSS feed items, get 30 to 70 potential items of value open in tabs in a browser, and get this down to a usual 5 to 12 items of current or future value. Yes, in 90 minutes (keeping focus to sort out the chaff is essential). From this point I would blog, or at least put these items into Delicious and/or Ma.gnolia or Yahoo MyWeb 2.0 (this service was insanely amazing and years ahead of its time; I will write up its value).
The volume and tools have changed over time. Today the same number of feeds (approximately 400) turns out 500 to 800 new items each day. I now post less to Delicious and opt to put 25 to 40 items each day into DevonThink. For a while I had stopped using DevonThink (DT) and opted for Yojimbo and then Together.app, as they had tagging and I could add my own context (I found my own context had more value than DevonThink’s contextual relevance engine). But, when DevonThink added tagging it became an optimal service, so I added my archives from Together and now use DT a lot.
Relevance of As if Had Read
One of the things I have been finding is I can not only search within the content of items in DT, but I can quickly aggregate related items by tag (work projects, long writing projects, etc.). But, its incredible value is how it has changed my information triage and process. I am now taking those 30 to 40 tabs and doing a more in-depth read, but only rarely reading the full content, unless its current value is high or the content is compelling. I am acting on the content more quickly and putting it into DT. When I need to recall information, I use the search to find content and then pull related content closer. I not only have the item I was seeking, but other related content that adds depth and breadth to a subject. My own personal recall of the content is enough to start a search that will find what I was seeking with relative ease. Where I did a deeper skim read in the past, I will now do a deeper read of the prime focus. My augmented recall, with the brilliance of DevonThink, works just as well as if I had read the content deeply the first time.
Inline Messaging
Many of the social web services (Facebook, Pownce, MySpace, Twitter, etc.) have messaging services so you can communicate with your “friends”. Most of the services will only ping you on communication channels outside their website (e-mail, SMS/text messaging, feeds (RSS), etc.) and require the person to go back to the website to see the message, with the exception of Twitter, which does this properly.
Inline Messaging
Here is where things are horribly broken. The closed services (except Twitter) will let you know you have a message on their service via your choice of communication channel (e-mail, SMS, or RSS), but not all offer all options. When a message arrives for you in the service, the service pings you in the communication channel to let you know you have a message. But, rather than give you the message, it points you back to the message on the website (Facebook does provide SMS-chunked messages, but not e-mail). This means they are sending a message to a platform that works really well for messaging just to let you know you have a message, but not delivering that message. This adds extra steps for the people using the service, rather than making a simple streamlined service that truly connects people.
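A sketch of the difference being described: a notification that carries the message body plus a permalink back to the thread, rather than a bare pointer. The function and field names are illustrative, not any particular service's code:

```php
<?php
// Sketch: deliver the message in the notification channel itself,
// not just a pointer back to the website. Names are illustrative.
function notify_by_email(string $to, string $from,
                         string $body, string $permalink): void
{
    $subject = "New message from $from";
    // The payload travels with the alert; the link is for follow-up only.
    $text = $body . "\n\nReply or view the thread: " . $permalink;
    mail($to, $subject, $text);
}
```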
Part of this broken interaction is driven by Americans building these services with desktop-centric and web views, forgetting mobile is not only a viable platform for messaging, but the most widely used platform around the globe. I do not think the iPhone, which has been purchased by the owners and developers of these services, will help, as the iPhone is an elite tool that is not like the messaging experience for the hundreds of millions of mobile users around the globe. Developers not building or considering services for people to use on the devices or applications of their choice is rather broken development these days. Google gets it with Google Gears and their mobile efforts, as does Yahoo with its Yahoo Mobile services and other cross-platform efforts.
Broken Interaction Means More Money?
I understand the reasoning behind the services adding steps and making the experience painful: it is seen as money in their pockets through pushing ads. The web is a relatively easy means of tracking and delivering ads, which translates into money. But, inflicting unneeded pain on their customers cannot be justified by money. Pain inflicted on customers will only push them away and leave fewer people to look at the ads. I am not advocating giving up advertising, but moving ads into the other channels, or building solutions that deliver the messages to people who want the messages and not just a notification that they have a message.
These broken interactions are somewhat annoying, but the services have enough value to keep somebody going back. When Pownce arrived on the scene a month or so ago, it included the broken messaging, but did not include mobile or RSS feeds. Pownce only provides e-mail notifications, and they only point you back to the site. That is about as broken as it gets for a messaging and status service. Pownce has a beautiful interface, with some lightweight sharing options and the ability to build groups, and it has a lightweight desktop application built on Adobe AIR. The AIR version of Pownce is not robust enough with messaging to be fully useful. Pownce is still relatively early in its development, but they have a lot of fixing to do on things that are made much harder than they should be for consuming information. They include Microformats on their pages, where they make sense, but they are missing the ease-of-use step for regular people of dropping that content into their related applications (a small button on the item with the microformat that converts the content is drastically needed for ease of use). Pownce has some of the checkboxes checked and some good ideas, but the execution is far from there at the moment. They really need to focus on ease of use. If this is done, maybe people will come back and use it.
Good Examples
So who does this well? Twitter has been doing this really well, and Jaiku does this really well on Nokia Series60 phones (after the first version of Series60). Real cross-platform and cross-channel communication is the wave of right now for those thinking of developing tools with great adoption. The great adoption is viable as this starts solving technology pain points that real people are experiencing, and more will be experiencing in the near future. (Providing a solution to refindability is the technology pain point that del.icio.us solved.) The telecoms really need to be paying attention to this, as do the players in all messaging services. From work conversations and attendees at the Personal InfoCloud presentation, they are beginning to get that the person wants and needs to be in control of their information across devices and services.
Twitter is a great bridge between web and mobile messaging. It also has some killer features that add to this ease of use and adoption like favorites, friends only, direct messaging, and feeds. Twitter gets messaging more than any other service at the moment. There are things Twitter needs, such as groups (selective messaging) and an easier means of finding friends, or as they are now appropriately calling it, people to follow.
Can we not all catch up to today's messaging needs?
Life Data Streams Bubbling
Emily Chang's post about her My Data Stream brought back memories from a ton of conversations last year. I captured a few of these ideas in a relatively short Life Data Stream post over at Personal InfoCloud, which has comments turned on.
You may want to take a look at TechMeme for related posts.
Stikkit Adds an API
Stikkit has finally added an API. This makes me quite happy. Stikkit has great ease of information entry, and it is perfect for adding annotations to web-based information.
Stikkit is My In-line Web Triage
I have been using Stikkit, from the bookmarklet, as my in-line web information triage. If I find an event or something I want to come back to later (other than to read and bookmark), I pop that information into Stikkit. Most often it is to remind me of deadlines, events, company information, etc. I open the Stikkit bookmarklet and add the information. The date information I add is dumped into my Stikkit calendar, names and addresses are put into the Stikkit address book, and I can tag them for context for easier retrieval.
Now, with the addition of the API, it is easy to retrieve a vCard, iCal, or other standard data format from Stikkit that I can drop into the tools where I normally aggregate similar information. I do not need to refer back to Stikkit to copy and paste (or worse, mis-type) into my work apps.
I can also publish information from my preferred central data stores to Stikkit so I have web access to events, to-dos, names and addresses, etc. From Stikkit I can then share the information with only those I want to share it with.
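I don't have Stikkit's API documentation at hand, so this is a generic sketch of the retrieve-and-reuse pattern described above: fetching an iCalendar payload from a hypothetical endpoint and pulling out the events to drop into a local calendar tool. The URL is a placeholder, not Stikkit's actual API:

```php
<?php
// Generic sketch of the retrieve-and-reuse pattern described above.
// The endpoint URL is hypothetical, not Stikkit's actual API.
$ics = file_get_contents('https://api.example.com/calendars/mine.ics');

// Minimal iCalendar parse: collect each event's SUMMARY and DTSTART.
preg_match_all('/BEGIN:VEVENT.*?END:VEVENT/s', $ics, $events);
foreach ($events[0] as $event) {
    preg_match('/SUMMARY:(.*)/', $event, $summary);
    preg_match('/DTSTART[^:]*:(.*)/', $event, $start);
    echo trim($start[1] ?? ''), '  ', trim($summary[1] ?? ''), "\n";
}
```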
Stikkit is growing to be a nice piece for microcontent tracking in my Personal InfoCloud.
Rebranding and Crossbranding of .net Magazine
From an e-mail chat last week I found out that .net magazine (from the UK) is now on the shelves in the US as "Web Builder". Now that I have this knowledge I found the magazine on my local bookstore shelves with ease. Oddly, when I open the cover it is all ".net".
Rebranding and Crossbranding
In the chat last week I was told the ".net" name had a conflict with a Microsoft product and the magazine is not about the Microsoft product in the slightest, but had a good following before the MS product caught on. Not so surprisingly the ".net" magazine does not have the same confusion in the UK or Europe.
So, the magazine had a choice: not get noticed, or rebrand the US version as “Web Builder” and put up with the crossbranding. This is not optimal, as it adds another layer of confusion for those of us who travel and are used to the normal name of the product and look only for that name. Optimally, one magazine name would be used for the English-language web design and development magazine. If this ever happens, it will mean breaking a well-loved magazine name for its many loving fans in the UK and Europe.
What is Special About ".net" or "Web Builder"?
Why do I care about this magazine? It is one of the few print magazines about web design and web development. Not only is it one of the few, but it flat out rocks! It takes current Web Standards best practices and makes them easy to grasp. It is explaining all of the solid web development practices and how to not only do them right, but understand if you should be doing them.
I know, you are saying, “but all of this stuff is already on the web!” Yes, this stuff is on the web, but not every web developer lives their life on the web; most importantly, many of the bosses and managers who will approve this stuff do not read things on the web, they still believe in print. Saying the managers need to grow up and change is short-sighted. One of the best progressive thinkers on technology, Doc Searls, is on the web, but he also has a widely read regular column in Linux Journal. For me, the collection of content in “.net” is some of the best stuff out there. I read it on planes and while I am waiting for a meeting or appointment.
I know the other thing many of you are saying: “but it is only content from UK writers!” Yes, so? The world is really flat, and where somebody lives makes little difference, as we are all only a mouse click away from each other. We all have the same design and development problems, as we are living with the same browsers and similar people using what we design and build. But, it is also amazing that a country a fraction the size of the US has many more killer web designers and developers than the US. There is some killer stuff going on in the UK on the web design and development front. There is great thought, consideration, and research that goes into design and development in the UK and Europe; in the US it is “let’s try it and see if it works or breaks” (this is good too and has its place). It is out of the great thought and consideration that the teaching and guiding can flow. It also leads to killer products. Looking at the Yahoo Europe implementations of microformats rather far and wide in their products is telling, when it has happened far slower in the main Yahoo US products.
Now I am just hoping that ".net" will expand their writing to include a broader English speaking base. There is some killer talent in the US, but as my recent trip to Australia showed there is also killer talent there too. Strong writing skills in English and great talent would make for a great global magazine. It could also make it easier to find on my local bookstore shelves (hopefully for a bit cheaper too).
Cultures of Simplicity and Information Structures
Two Conferences Draw Focus
I am now getting back to responding to e-mail sent in the last two or three weeks and digging through my to do list. As time wears on, I am still rather impressed with both the XTech and Microlearning conferences. Both have a focus on information and data that mirrors my approaches from years ago and are the foundation for how I view all information and services. Both rely on well-structured data. This is why I pay attention and keep involved in the information architecture community. Well-structured data is the foundation of what falls into the description of web 2.0. All of our tools for open data reuse demand that the underlying data is structured well.
Simplicity of the Complex
One theme that continually bubbled up at Microlearning was simplicity. Peter A. Bruck, in his opening remarks at Microlearning, focussed on simplicity being the means to take the complex and make it understandable. There are many things in the world that are complex and seemingly difficult to understand, but many complex systems are made up of simple steps and simple-to-understand concepts that are strung together to build complex systems and complex ideas. Every time I think of breaking down the complex into simple components, I think of Instructables, which allows people to build step-by-step instructions for anything, and makes each of the steps a reusable object for other instructions. The Instructables approach is utterly brilliant and dead in line with the microlearning approach of breaking down learning components into simple lessons that can be used and reused across devices, based on the person wanting or needing the instruction, and providing it in the delivery medium that matches their context (mobile, desktop, laptop, TV, etc.).
Simple Clear Structures
This structuring of information ties back into the frameworks for syndication of content and well-structured data and information. People have various uses and reuses for information, data, and media in their lives. This is the focus of the Personal InfoCloud. This is the foundation of information architecture: addressable information that can be easily found. But, in our world of information floods and information pollution, with too much information to sort through, findability of information is as important as refindability (which is rarely addressed). Along with refindability is the means to aggregate the information in interfaces that make sense of the information, data, and media so as to provide clarity and simplicity of understanding.
Europe Thing Again
Another perspective on the two conferences is that they were both in Europe. This is not a trivial variable. At XTech there were a few other Americans, but at Microlearning I was the only one from the United States, and there were a couple of Canadians. This European approach to understanding and building is slightly different from the approach in the USA. In the USA there is a lot of building and then learning and understanding, whereas in Europe there seems to be much more effort in understanding and then building. The results are somewhat different, and the professional nature of European products that work out of the gate is different from the USA. This was really apparent with System One, which is an incredible product. System One has all the web 2.0 buzzwords under the hood, but they focus on a simple-to-use tool that pulls together the best of the new components, but only where it makes sense, to create a simple tool that addresses complex problems.
Culture of Understanding Complex to Make Simple
It seems the European approach is to understand and embrace the complex and make it simple through deep understanding of how things are built. It is very similar to Instructables as a culture. The approach in the USA seems to include the tools, but has lacked the understanding of the underlying components and in turn has left out elements that really embrace simplicity. Google is a perfect example of this approach. They talk simplicity, but nearly every tool is missing elements that make it fully usable (the calendar not having sync, not being able to turn on only one or two Google tools rather than everything). This simplicity is well understood by the designers, and they have wonderful solutions to the problems, but the corporate culture of churning things out gets in the way.
Breaking It Down for Use and Reuse
Information in simple forms that can be aggregated and viewed as people need in their lives is essential to us moving forward and taking the pain out of technology that most regular people experience on a daily basis. It is our jobs to understand the underlying complexity, create simple usable and reusable structures for that data and information, and allow simple solutions that are robust to be built around that simplicity.
More XTech 2006
I have had a little time to sit back and think about XTech, and I am quite impressed with the conference. The caliber of the presenters and the quality of their presentations were some of the best of any conference I have been to in a while. The presentations got beneath the surface level of the subjects and provided insight that I had not run across elsewhere.
The conference focus on browsers, open data (XML), and high-level presentations was a great mix. There was much cross-over in the presentations, and I eventually got the hang that this was not a conference of stuff I already knew (or presented at a more introductory level), but of things I wanted to dig deeper into. I began to realize late into the conference (or after, in many cases) that the people presenting were people whose writing and contributions I had followed regularly when I was doing deep development (not managing web development) of web applications. I changed my focus last Fall to get back to developing innovative applications, working on projects that are built around open data, and filling some of the many gaps in the Personal InfoCloud (I also left to write, but that did get sidetracked).
As I mentioned before, XTech had the right amount of geek mindset in the presentations. The one that really brought this to the forefront of my mind was XForms, an Alternative to Ajax by Erik Bruchez, which focussed on using XForms as a means to interact with structured data, much as one would with Ajax.
Once it dawned on me that this conference was rather killer and I should be paying attention to the content, and not just to those in the floating island of friends, the event was nearly two-thirds of the way through. This huge mistake on my part was due to the busy nature of things that led up to XTech, as well as not getting there a day or two earlier to adjust to the time and attend the pre-conference sessions and tutorials on Ajax.
I was thrilled to see the Platial presentation and meet the makers of the service. When I chose to attend Simon Willison’s presentation rather than the GeoRSS session, I realized there was much good content at XTech, and it is now on my must-attend list.
As the conference was progressing, I was thinking of all the people who would have really benefitted from and enjoyed XTech as well. A conference about open data and systems to build applications that meet real people’s needs is essential for most developers working out on the live web these days.
If XTech sounded good this year in Amsterdam, you may want to note that it will be in Paris next year.
Changing the Flow of the Web and Beyond
In the past few days of being wrapped up in moving this site to a new host and client work, I have come across a couple items that have similar DNA, which also relate to my most recent post on the Come to Me Web over at the Personal InfoCloud.
Sites to Flows
The first item to bring to light is a wonderful presentation, From Sites to Flows: Designing for the Porous Web (3MB PDF), by Even Westvang. The presentation walks through the various activities we do as personal content creators on the web. Part of this fantastic presentation is its focus on microcontent (the granular content objects) and its relevance to context. Personal publishing is more than publishing on the web; it is publishing to content streams, or “flows” as Even states it. These flows of microcontent are used less in web browsers as their first point of use, and more often consumed in syndicated feeds (RDF, RSS/Atom, Trackback, etc.). Even moves on to talking about Underskog, a local calendaring portal for Oslo, Norway.
The Publish/Subscribe Decade
Salim Ismail has a post about The Evolution of the Internet, in which he states we are in the Publish/Subscribe Decade. In his explanation Salim writes:
The web has been phenomonally successful and the amount of information available on it is overwhelming. However, (as Bill rightly points out), that information is largely passive - you must look it up with a browser. Clearly the next step in that evolution is for the information to become active and tell you when something happens.
It is this being overwhelmed with information that has been of interest to me for a while. We (the web development community) have built mechanisms for filtering this information. There are many approaches to this filtering, but one of them is the subscription and alert method.
The Come to Me Web
It is almost as if I had written Come to Me Web as a response or extension of what Even and Salim are discussing (the post had been in the works for many weeks and is a longer explanation of a focus I started putting into my presentations in June). This come to me web is something very few are doing, and/or doing well, in our design and development practices beyond personal content sites (and even there it really needs a lot of help in many cases). Focussing on the microcontent chunks (or granular content objects, in my personal phraseology), we can not only provide the means for others to best consume the information we are providing, but also aggregate it and provide people with a better understanding of the world around them. More importantly, we provide the means to best use and reuse the information in people’s lives.
Important in this flow of information is to keep the source and identity of the source. Having the ability to get back to the origination point of the content is essential to get more information, original context, and updates. Understanding the identity of the content provider will also help us understand perspective and shadings in the microcontent they have provided.
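A sketch of that principle in feed aggregation: each item carries its source name and a link back to its origination point through to the aggregate view. The feed URLs here are placeholders:

```php
<?php
// Sketch: aggregate microcontent from several feeds while keeping
// the source identity and a link back to the origination point.
// Feed URLs are placeholders.
$feeds = [
    'Off the Top'        => 'https://vanderwal.net/random/rss.php',
    'Personal InfoCloud' => 'https://personalinfocloud.com/feed',
];

$items = [];
foreach ($feeds as $source => $url) {
    $rss = @simplexml_load_file($url);
    if (!$rss) {
        continue;
    }
    foreach ($rss->channel->item as $item) {
        $items[] = [
            'source' => $source,             // identity of the provider
            'title'  => (string)$item->title,
            'link'   => (string)$item->link, // way back to original context
        ];
    }
}

foreach ($items as $i) {
    echo "[{$i['source']}] {$i['title']} -> {$i['link']}\n";
}
```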
Minor Changes in Off the Top
Last night I was able to add back the Quick Links (my current bookmarks from del.icio.us). This was due in great part to the folks at del.icio.us, who now have a JavaScript that makes the process easy on you and easy on them (I am not sure how accessible this is, as I have not tested it, but normally these scripts are not accessible).
I also brought back the link to just the Off the Top RSS feed, which has nothing but the last 10 entries in archaic RSS .91 format. I am still offering the wonderful Feedburner for Off the Top option, which has Off the Top entries, my del.icio.us entries, and my Flickr photo feed all bundled in one. I have quite a few people reading this in RSS on mobile devices at the moment, and I thought I would make it easier for others going that route to get just the content of Off the Top.
Designing for the Personal InfoCloud presentation at WebVisions 2005 Wrap-up
I have posted my presentation from yesterday’s session at WebVisions, in Portland, Oregon. The files, Designing for the Personal InfoCloud, are in PDF format and weigh in at 1.3MB.
I really had a blast at the conference and wish I could have been there the whole day. I will have to say from the perspective of a speaker it is a fantastically run conference. Brad Smith of Hot Pepper Studios did a knock out job pulling this conference together. It should be on the must attend list for web developers. I was impressed with the speakers, the turn out, and how well everything was run. Bravo!
WebVisions is held in one of my favorite cities, Portland, Oregon, which has some of the best architecture and public planning of any North American city. I took more than 300 photos in 48 hours and will be posting many at Flickr in the next couple of days.
Replacement RSS and XML Button
Mike just posted a killer international and language-free RSS logo button on his site. I really like it. Mainly, it works for those of us who understand the RSS text version, but for those who are not as technically forward, or who use non-English/Western languages, this could still work. The RSS and XML text on the buttons always needs explanation for those not familiar with the terms. The end of many tutorials is often, “just click it, you do not really need to know what it means, just click”. Something tells me Mike is on to something profound yet so wonderfully simple.
Response to Usability of Feeds
Jeffrey Veen has a wonderful post about the usability of RSS/Atom/feeds on his site. I posted a response that I really want to keep track of here, so it follows...
I think Tom's pointer to the BBC is a fairly good transition to where we are heading. It will take the desktop OS or browser to make it easier. Neither of these are very innovative or quickly adaptive on the Windows side of the world.
Firefox was the first browser (at least that I know of) to handle RSS outside the browser window, but it was still handled in a side window of the browser. Safari has taken this to the next step, which is to use a mime-type to connect the RSS feed to the desktop application of preference. But, we are still not where we should be, which is to click on the RSS button on a web page and have that link dropped into one’s preferred reader, which may be an application on the desktop or a web/internet-based solution such as Bloglines.
All of this depends on who we test as users. Many times as developers we test in the communities that surround us, which is a skewed sample of the population. If one is in the Bay Area it may be best to go out to Stockton, Modesto, Fresno, or up to the foothills to get a sample of the population that is representative of those less technically adept, who will have very different usage patterns from those we normally test.
When we test with these lesser adept populations it is the one-click solutions that make the most sense. Reading a pop-up takes them beyond their comfort zone or capability. Many have really borked things on their devices/machines by trying to follow directions (be they well or poorly written). Most only trust easy solutions. Many do not update their OS as it is beyond their trust or understanding.
When trends start happening out in the suburbs, exurbs, and beyond the centers of technical adeptness (often major cities) that is when they have tipped. Most often they tip because the solutions are easy and integrated to their technical environment. Take the Apple iPod, it tipped because it is so easy to set up and use. Granted the lack of reading is, at least, an American problem (Japanese are known to sit down with their manuals and read them cover to cover before using their device).
We will get to the point of ease of use for RSS and other feeds in America, but it will take more than just a text pop-up to get us there.