Content management is back at the forefront of every aspect of my digital life. Content management revolves around keeping information current, accurate, and reusable (there are many more elements, but these cut to the core of many issues). Maintaining Websites and providing information resources on the broader Internet have long revolved around static Web pages or information stored in MS Word, PDF files, etc. Content management has been the painful task of keeping this information current and accurate across all these various input and output platforms. This brings us to content management systems (CMS).
As I pointed to earlier, there are good resources for understanding CMS and how our roles change when we implement one. Important to that understanding is the separation of content (data and information) from presentation (layout and style) and from the application (PDF, Web page, MS Word document, etc.). This requires an input mechanism, usually a form that captures the information and places it in a data/information store, which may be a database, an XML document, or a combination of these. This also provides for a workflow process that involves proofing and editing the information along with versioning it.
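To make the separation concrete, here is a rough sketch in Python of a content record with a simple proofing and versioning workflow; the field names and workflow states are purely illustrative, not taken from any particular CMS.

```python
from dataclasses import dataclass, field
from datetime import datetime

# Illustrative workflow states; a real CMS defines its own.
WORKFLOW_STATES = ("draft", "in_review", "approved", "published")

@dataclass
class ContentItem:
    """Content stored apart from any presentation or output format."""
    item_id: str
    title: str
    body: str                      # raw content only; no layout or styling
    author: str
    state: str = "draft"           # current position in the workflow
    version: int = 1
    updated: datetime = field(default_factory=datetime.utcnow)

    def revise(self, new_body: str) -> None:
        """Editing creates a new version and drops the item back to draft."""
        self.body = new_body
        self.version += 1
        self.state = "draft"
        self.updated = datetime.utcnow()

    def advance(self) -> None:
        """Move the item one step forward in the proofing/publishing workflow."""
        idx = WORKFLOW_STATES.index(self.state)
        if idx < len(WORKFLOW_STATES) - 1:
            self.state = WORKFLOW_STATES[idx + 1]
```

Notice there is nothing about HTML, PDF, or print in the record itself; presentation gets applied later, at output time.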
Key to the CMS is the separation of content, which means there needs to be a method of keeping links aside from the input flow. Mark Baker provides a great article, What Does Your Content Management System Call This Guy, about how to handle links. Links are an element that separates the CMS-lite tools (Blogger, Movable Type, etc.) from more robust CMS (other differences include more expansive workflow, metadata capturing, and content type handling (images, PDF, etc. and their related metadata needs)). Many older systems, often used for newspaper and magazine publications (the New York Times and San Francisco Chronicle), placed their links outside the body of the article. This external linking provided an easy method of link management that helps ensure there are no broken links: if an external site changes its location (URL), there should be only one place where we have to modify that link, rather than searching every page looking for links to replace. The Baker article outlines how many current systems provide this same service, which is similar to the Wiki Wiki approach. The Baker-outlined method also benefits greatly from all of the Information Architecture work you have done to capture classifications of information and metadata types (IA is a necessary part of nearly every development process).
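Here is a minimal sketch of that kind of external link store, assuming a made-up [link:key] marker syntax and placeholder URLs; a real CMS would keep the link table in its database.

```python
import re

# One central store of links, keyed by a stable name. The keys and URLs
# here are placeholders for illustration, not real link records.
LINKS = {
    "nytimes": "https://example.com/nytimes",
    "sfchronicle": "https://example.com/sfchronicle",
}

# Authors reference links by key, never by raw URL, so body text stays stable.
BODY = "Coverage ran in [link:nytimes] and [link:sfchronicle] this week."

def resolve_links(body, links):
    """Replace [link:key] markers with anchor tags at output time."""
    def substitute(match):
        key = match.group(1)
        url = links.get(key)
        if url is None:
            # A missing key is caught here instead of shipping a broken link.
            raise KeyError(f"no link registered for '{key}'")
        return f'<a href="{url}">{key}</a>'
    return re.sub(r"\[link:([a-z0-9_-]+)\]", substitute, body)

# If a site moves, only its LINKS entry changes; every article that
# references that key picks up the new URL at the next render.
print(resolve_links(BODY, LINKS))
```

The point is the single place to change: update the store and every page that references the key is fixed.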
What this gets us is content that we can easily output to a Web site in HTML/XHTML in a template that meets all accessibility requirements, has been through quality assurance, and provides a consistent presentation of information. The same information can be output in a simpler presentation template for handheld devices (AvantGo, for example) or in WML for WAP. The same information can be provided in an XML document, such as RSS, which gives others easier access to the information. The same information can be output to a PDF template that is then sent to a printer for a newsletter, or the PDF can be distributed for users to print out on their own. The technologies for information presentation are ever changing, and a CMS allows us to easily keep up with these changes and output the information in the "latest and greatest", while still being able to provide information to those using older technologies.
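As a sketch of how one content record can feed several of those outputs (the record fields and templates below are illustrative only):

```python
from xml.sax.saxutils import escape

# A single content record; the fields and values are made up for illustration.
article = {
    "title": "Quarterly Newsletter",
    "body": "The same content feeds every output format.",
    "link": "https://example.com/newsletter/q1",
}

def to_html(item):
    """Full Web page fragment; styling would come from a shared template."""
    return (f"<article><h1>{escape(item['title'])}</h1>"
            f"<p>{escape(item['body'])}</p></article>")

def to_plain_text(item):
    """Stripped-down presentation for handhelds or low-bandwidth delivery."""
    return f"{item['title']}\n\n{item['body']}"

def to_rss_item(item):
    """One <item> element for inclusion in an RSS feed."""
    return (f"<item><title>{escape(item['title'])}</title>"
            f"<link>{escape(item['link'])}</link>"
            f"<description>{escape(item['body'])}</description></item>")

for render in (to_html, to_plain_text, to_rss_item):
    print(render(article))
```

Adding a PDF or WML template later is just another render function over the same record; the content itself never changes.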