The other day, I found an Internet Explorer ordered-list bug I hadn't seen before:

A numbered list of items which looks right in other browsers will be rendered with the number "1" in front of every item in IE7.
This happens when a width is set on the LI elements of an ordered list: Internet Explorer 7 will not increment the numbers.

Remove the width from the LI elements (or move it to an element inside the LI). Then IE will render the numbers correctly. (Except, of course, for the decimal-leading-zero list style, which it still ignores.)
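A minimal sketch of the workaround (the class names here are illustrative, not from the original post): keep the LI itself unsized and move the width to an element inside it.

```css
/* Triggers the bug: in IE7 every item is numbered "1" */
ol.broken li { width: 200px; }

/* Workaround: leave the LI unsized and size a child element instead */
ol.fixed li span { display: block; width: 200px; }
```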





The revision number of a file on the branch will be a four-part number like 1.1.2.1; the third part (the 2) is the branch number.




Now that you have created a project, the other people on your team should check it out from CVS. If they aren't sure how to check out a project from CVS, have them view this tutorial then return here.
Scenario walkthrough
We'll be following the workflow shown in the diagram below. This article will employ a scenario of two programmers, Paul and Wing, working on separate branches of the same project.
The main branch of development, or trunk, is labeled WD in the diagrams, because Wing will be developing in that branch. Paul first branches off main and modifies some files. His new branch is called p1test. He starts working in that branch.



Then he switches to the main branch and merges the code from the p1test branch down into the main branch, fixing any conflicts that have arisen.
Note: Merges are performed INTO the branch you are currently working in, not FROM it.
Paul's tagged milestones give CVS a point of reference to determine what has changed since the milestone. Tags can also be used to denote stable versions or recovery points.
Now development continues. Paul opens the p1test branch again and continues working. When he reaches another milestone, he commits his changed files and tags them as P2.
At this point, Wing, working in the main branch, merges Paul's changes back down to the main branch, and resolves the conflicts. This scenario can be repeated as changes are made until work in the branch is completed and the final merge is done.
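For reference, the scenario above can be sketched with CVS's command-line client (the commit messages are placeholders; Eclipse's Team menu performs the equivalent operations):

```shell
# Paul: tag the branch point, create the branch, and switch to it
cvs tag Root_p1test
cvs tag -b p1test
cvs update -r p1test

# ...work on the branch, then commit and tag the milestone
cvs commit -m "p1test milestone"
cvs tag P1

# Merge INTO the trunk: switch back to the trunk, merge, resolve, commit
cvs update -A
cvs update -j p1test
cvs commit -m "Merged p1test into trunk"
```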


We are now at this point in the workflow diagram.

The starting tag Root_p1test is a point of reference that will be used by Eclipse during the merging process to determine what has changed since the last time the code branches were identical.
When the branch is first created, the branch and trunk are identical. As work proceeds along both paths, differences accumulate between the two lines of code, and at various points in development the branch and trunk are resynchronized. A logical point for a merge down to the main branch is when new features in the branch are stable enough for testing. Conversely, if the branch is still unstable but a lot of work has occurred in the trunk, the latest version of the trunk should periodically be merged up into the branch, so that the branch does not fall too far out of sync with the trunk.
If you wait too long to merge while parallel development occurs, it is possible to have problems with enormous code conflicts which need to be resolved. Merging is more of an art than a science, and it can take a fair bit of thought and debugging to make things work right during a big merge. See Branching and Merging Anti-Patterns for information on "Big Bang Merges."
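When the same branch is merged more than once, CVS should be told to merge only the changes made since the previous merge point by giving `cvs update` two `-j` options; this avoids re-applying (and re-conflicting) changes that were already merged:

```shell
# Second merge: apply only the branch changes made between tags P1 and P2
cvs update -A
cvs update -j P1 -j P2
cvs commit -m "Merged p1test changes since P1"
```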

Now that you've created a new project, check it into the CVS repository so your team can collaborate on it.

At this point the project will be tagged as a milestone and branched, so that two developers can work on different tasks. The discrete lines of code will then be merged together and any conflicts resolved. After tagging again, and further work, a second merge will be performed.


To prevent this, projects can be developed in concurrent paths, with one path devoted to the main line of development and others set aside for testing new features or bug fixes. One line can be kept stable, or close to stable, while others are not yet even in testing.
There are several commonly used branching strategies; the choice can be determined by the requirements of the project or the workflow preferences of the team. Branches can correspond to phases of development, specific tasks, or particular releases.
Chris Birmele of Microsoft Australia recommends validating your chosen strategy by considering what would happen in a change scenario - would you have to completely restructure your branches to compensate? Try to predict the worst case and plan for it.
A commonly used branching strategy is one that corresponds to the three phases of development activity: Development, Test, and Production. For in-depth discussions of branching strategies, see the Microsoft Team Foundation Server Branching Guidance: [LINK]


Whichever configuration you choose, it's imperative that all team members are on board and fully understand the workflow and the best practices associated with tagging, merging, and so on. Symptoms that team members do not understand the strategy, or that the wrong structure was chosen, include avoidance of merging, deferral of merges until the very end of the project, and general confusion about versions. A longer list of symptoms can be seen here: [LINK]
The branching structure used in the following tutorial is a task-based branch, where the branch is used for work on a specific task. These are short-term branches which are merged back into the main line of development - the trunk - as soon as the task is completed.
With fall coming, it's time for corn mazes. There are loads of them to find in Google Earth. Here are a few to get you started, including the Google Earth KML files. In England they mostly do crop circles, and it's too late in the year for those (the fields are all harvested by now), but next year be sure to look for them.
Crot's Corn Maze (near Temperance, Michigan)
Download KML file for Crot's Corn Maze
Fun Acres Corn Maze (South Rockwood, Michigan)
Fun Acres Corn Maze and Family Fun Farm.kml
Helwig Farms (Monroe, Michigan)
Portland Oregon
Corn Maze in Portland OR.kml
Eclipse can be used for many types of development tasks. As a result, the interface is almost infinitely customizable, which can be confusing. When you first open it, you will probably see only functions related to Java development, as shown below:
If you are not developing Java applications, but want to use CVS primarily to manage a website or web application you may want to rearrange the interface to better suit your purposes. What follows is an example of setting Eclipse up to manage an ASP/HTML web application.

Click the Open Perspective icon, and select "Other" from the dropdown menu.






You may, of course, create your own perspectives to better suit your needs.
An instructional designer came to me with a Captivate issue the other day. She had converted a working version 3 Captivate file to version 4. The converted file no longer calculated scores properly. There was a walk-through tutorial showing how to use a new web application, then a short multiple-choice quiz with 5 questions. The setting "Show Progress" was turned on, so each question should show a label: "Question 1 of 5, Question 2 of 5, etc."
In version 3, everything had worked fine. In version 4, things looked fine in Edit mode, but at runtime, the numbering was off. The first question was numbered "Question 2 of 7" and it would increment from there.
I immediately assumed it was counting click boxes from somewhere else in the file, but when I looked at the Advanced Interaction page, Add to Total was grayed out for all buttons and click boxes, and Track Score was not checked for any of them, so I assumed it could not be the click boxes. Thinking it might be some corruption in the question slides, I re-created all the quiz questions, and yet the problem remained.

One of my colleagues wrote that she had used the right-click feature in her latest Captivate project. When she previewed the file using Preview > Project, it worked fine. But when she attempted to publish she got the following message.
"2 click boxes of this project have the 'Right-Click property on, these click boxes will work as 'Left-Click' when Enable Accessibility is set true. Do you want to continue?"
The solution is simply to turn OFF accessibility in the project's Publish Settings. (In the Publish window, click Preferences; it's under Project in the Category tree in that pane.)
Thanks to RoboWizard for this tip.
If this is true, it could be revolutionary.
Fossils From Animals And Plants Are Not Necessary For Crude Oil And Natural Gas, Swedish Researchers Find
ScienceDaily (2009-09-12) -- Researchers in Sweden have managed to prove that fossils from animals and plants are not necessary for crude oil and natural gas to be generated. The findings are revolutionary since this means, on the one hand, that it will be much easier to find these sources of energy and, on the other hand, that they can be found all over the globe. [Link]
I don't know why it took me so long to figure this out! To use secure FTP (SFTP) with a HostGator account on a shared server, simply use port 2222.
Here's an example of how to set up the remote tab in Dreamweaver:
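The same connection can be tested outside Dreamweaver with OpenSSH's command-line sftp client (the hostname and username below are placeholders):

```shell
# Connect over SFTP using HostGator's alternate port 2222
sftp -P 2222 user@example.com
```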
When there is a workforce performance or knowledge gap to fill, trainers understandably think first of using training to fill it. But is more training really always the best answer? Depending on the skills involved and the characteristics of the audience, process improvement, usability improvement, training or performance support may all be worth considering.
As instructional designers, when we are given a new project, we assume that training is the deliverable we are supposed to create. Many of us are familiar with using the ADDIE process to develop training: Analysis, Design, Development, Implementation and Evaluation. However, the ADDIE model has an emphasis on training as its goal, and the analysis typically assumes this as a starting point.
Root cause analysis is a process used to determine the underlying causes of a problem or performance gap, and what kind of intervention might best resolve it. It answers questions like "What's the problem? Why did it happen? What can be done to prevent it?" It may become apparent that the best solution is something other than training, such as process improvement or redesign, or perhaps making the tools used in the process more usable.
There are often several possible points of intervention to improve outcomes. One point which is increasingly the focus of attention is the actual point of performance of a task. This is the point where Performance Support comes in.
-- From The Use of Manual Job Aids by Health Care Providers: What Do We Know? a white paper by Elisa Knebel of The Quality Assurance Project.
--[LINK]
Electronic Performance Support systems are usually online or desktop applications that are "structured to provide immediate, individualized on-line access to the full range of information, software, guidance, advice and assistance, data, images, tools, and assessment and monitoring systems to permit job performance with minimal support and intervention by others."
--from Electronic Performance Support Systems by Gloria Gery
Website: introduction to EPSS concepts
"A civilization without instrumentalities? Incredible." --Forbidden Planet
Fortunately, researchers have been working steadily for decades to make computers and communication devices nearly vanish by embedding them into our surroundings and networking them so they can sense the environment and interact with us in a manner that would integrate better with our living and working situations.
Early research groups developed scenarios to demonstrate the utility of these systems, which included independence support for the elderly, meeting facilitation, augmented driving and enhanced social interaction.
In 2008, the working group on Ambient Computing from the European technology consortium InterLink published a paper summarizing the state of the art in Ambient Computing, focusing on the social awareness of systems and privacy concerns. Included in this paper are a history of projects and use-case scenarios and a description of the original vision of ubiquitous, pervasive networks based on many "invisible" small computing devices embedded into the environment. These "smart ecosystems" of devices were to provide an intuitive user experience, enabling new types of interaction, communication and collaboration:
"...the degree of diffusion of smart devices...will result in smart ecosystems that might parallel other ecosystems in the not too far future.
... a major challenge will be to orchestrate the large number of individual elements and their relationships, connect and combine them via different types of communication networks to higher level, aggregated entities and investigate their emerging behaviour."
Similarly, an MIT project called Oxygen, active around 2000-2003, focused on Intelligent Spaces, which would sense the presence of people, their tasks, and even their attention, and react appropriately, and on Mobile Devices, which presaged today's smart phones by creating connections to the physical world through a cluster of technologies: cameras, sensors, networking, accelerometers, microphones, speakers, phones, GPS, and so on.
Examples of smart artifacts include the HelloWall, a wall-size (but somewhat primitive) ambient display that communicates, via a sort of flexible dot-based code, based on who is nearby; the ViewPort, a mobile handheld device which can communicate with other items in the room, including the HelloWall; and a variety of telepresence devices such as the MirrorSpace and VideoProbe.


Scenarios from the MIT Oxygen project included: a business conference involving people in different countries coordinating a meeting in London using different languages and automated scheduling and travel planning, as well as navigational and data assistance once they arrive, and a "guardian angel" which allows aging-in-place by providing memory and safety support to elderly people living independently.
Over the last decade, many of these scenarios have been at least partially realized, often through device configurations not precisely foreseen by the early ambient-computing thinkers, such as smartphones, GPS units, RFID tags, and Bluetooth, as the market determines the dominant technologies. These configurations now emphasize embedded devices less and portable devices more, but the end result is the same.
Smartphones bristling with data inputs - GPS, accelerometers, networking, video cameras, microphones, keyboards, and multi-touch - have accelerated the evolution of pervasive computing by allowing mashups among all of these technologies, resulting in what could be called augmented intelligence for daily living.

- When use of a job aid would damage credibility
- When speedy performance is a priority
- When novel and unpredictable situations are involved
- When smooth and fluid performance is a top priority
- When the employee lacks sufficient reading, listening, or reference skills
- When the employee is not motivated
The Minority Report gestural interface

Video of the Minority Report interface in action
In a pervasive computing scenario, devices sense or communicate with people in order to provide contextually appropriate services. The interface is a key component in these scenarios since a primary goal is to make interaction with the devices as natural and intuitive as possible. An astonishing amount of creativity and innovation is being directed at the problem of interface design and the results will be life and work-transforming.
Some of the more interesting modes of human-computer interaction that have been tried include directed attention, voice, gesture tracking, motion (body motion through space or motion of the device, like the Wiimote or the Siftables shown below), haptic feedback-enhanced touch, augmented reality and even brain-computer interfaces. Some of the most exciting and innovative interfaces are shown below. With devices like these, I think we'll be reaching more tipping points in the workplace very soon.
Interactive Floor Projection Screens
These systems project images on the floor, and use a camera to track body motion across the surface. You may be familiar with these displays from seeing them at malls and theaters, but they could be used to select files, control other devices, etc.
This video shows an interactive floor installation at a Japanese art gallery, showing its use as an interface for retrieving information about works of art. For more information about building this type of interface, see the Natural User Interface Group for setup instructions and code.
Multitouch interfaces (like the one on the iPhone) are used in walls, tables, and smartphones. Microsoft Surface, a multi-touch table that allows multi-user, fine-grained control of objects on screen, also interacts with real objects and other devices using cameras and Wi-Fi. This is a great example of a "smart ecosystem" of intelligent devices which sense the status of the environment and lower barriers between people and their information. One demonstration shows an image transferred from a camera to the Surface table to a smartphone, simply by laying the camera and the smartphone on the table.
Touch Wall
An intelligent whiteboard that uses cameras to track hand motion across a vertical display allowing zoomable, panning navigation through information in a non-linear format.
In February 2009, Pattie Maes of MIT demonstrated a project called "Sixth Sense," led by Pranav Mistry, which uses an inexpensive wearable camera and projector to enable amazing interactions between the real world and the world of data. Using natural gestures to interact with any surface, users can manipulate data and view information about the products, people and ideas in the world around them. The revolutionary feature is its ability to act as a sixth sense for the metadata of real objects, through barcode, facial, or product recognition, combined with realtime feedback to the user through projection.
Siftables are physical tiles which form a smart ecosystem for manipulating data, in which each tile can sense the nearby tiles and what is on them. Tiles can be programmed to recognize proximity to specific types of tiles, so, for instance, "face" tiles can change their expressions as they move around the table, closer to or farther from other faces. The possibilities for this type of interaction, with real-world data represented by avatars that can react to specific attributes, are endless.
Interactive Window
An example of intelligent devices controlling ambient lighting and mood.
An installation in a Greek museum showing the use of novel interfaces in games.
Sony's Revolution
Smart tiles somewhat akin to David Merrill's concept. These tiles form a sort of object-oriented grammar in which one tile modifies the parameters of the subject of another tile.
Some innovative interface ideas from TAT