Wednesday, January 28, 2009

Resource leveling

Resource Leveling - why
First, a simple question: why level resources? Simply put, if resources are not leveled, they are either overallocated or underallocated (or both). Overallocation means the scheduled work is not likely to occur (or the resource may be annoyed, overworked, or burned out). Underallocation means suboptimal utilization and billing of the resource, lowering profitability. Lastly, your schedule won’t be accurate unless resources are correctly loaded.

As a rule, I’ve always used the Dependency Driven approach, where almost every task is linked explicitly. Simply put, the Dependency Driven approach uses dependencies (predecessors) to sequence the work to be done by a given resource. The common myth is that resource leveling never works in the real world, breaks project plans, and should be avoided like the plague. Much of this myth is due to a lack of understanding of how to use the tool at hand (our friend, MS-Project). To start with, the Dependency Driven approach encourages an overuse of dependencies, which masks the actual work-related dependencies. In all fairness, not every project is a candidate for resource leveling; for resource leveling to be effective, the underlying plan should already have the following characteristics:
- A minimal number of tasks with more than one resource assigned
- Removal of any constraints that are not absolutely necessary

What’s common wisdom for Resource Leveling?
- Never click automatic resource Leveling
- Always save your plan before even thinking of resource leveling
- It messes up your plan, throwing tasks in random order, and blowing dates
- It’s hopelessly broken and unworkable



Well, let’s explore Resource Leveling and utilization; you may find the analysis below can save significant time and effort in using Level Resources to optimize your plans. Let’s start with how MS-Project reports overutilization.

Crying wolf: how MS-Project over-reports resource overallocation
Has MS-Project ever identified resources as overallocated when you believe the plan is correct? Well, the calculation engine in MS-Project is somewhat primitive and can over-report utilization. As an example, I created a plan with three one-hour tasks that occur on the same day. MS-Project immediately flags me as overallocated, even though I only work three hours in an eight-hour day. That’s because MS-Project schedules all tasks to start at the same time of day. Let’s have a look at a simple 3-task plan:


Here’s the resource sheet, note I am “red” meaning overallocated:


Here’s the resource graph:


Why am I overallocated? Let’s take a closer look at MS-Project’s default resource scheduling in the Resource Usage view, examining in 15-minute increments how these tasks get scheduled by default:
Clearly, the tasks are all scheduled to start at 8am.
Interestingly, if I use “Level Resources” it will not fix the problem unless I set a high level of granularity, telling the leveling engine to fix problems minute by minute or hour by hour. Note the default day-by-day leveling will not fix the problem. Here’s how it looks after I level on an hour-by-hour basis:
Note the red is gone, indicating there is no more overallocation. The calculation engine will report overallocation if a resource has to work more than 60 seconds in any one minute of a project. This is because the scheduling and calculations are a bit simplistic, combined with MS-Project natively using one-minute increments for calculations. There are three basic approaches to resolving this:
  • Tools/Resource Leveling
    You will need to set fine granularity (hour-by-hour in this case) for leveling.
  • Edit Task Usage
    View/Task Usage and change the minor time scale to hours, then manually edit the working hours so there is no overlap.
  • Adjust units
    Set the Resource Units of both tasks to 50% so that when they overlap the maximum is 100%. Make this edit by selecting Window/Split and changing the units in the Task Entry form that appears in the lower window. The man-hours of work can be edited here as well to more realistic amounts if appropriate.
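The minute-granularity rule just described can be modeled in a few lines of Python. This is a sketch for illustration only (not MS-Project’s actual engine); the function name and the interval representation are my own:

```python
from collections import defaultdict

def overallocated_minutes(assignments, max_units=1.0):
    """Return the minutes in which a resource's combined assignment
    units exceed its capacity (1.0 = 100%).

    assignments: list of (start_minute, end_minute, units) tuples,
    end-exclusive. Any single minute over capacity flags the resource,
    mirroring the 60-seconds-in-a-minute rule described above.
    """
    load = defaultdict(float)
    for start, end, units in assignments:
        for minute in range(start, end):
            load[minute] += units
    return sorted(m for m, u in load.items() if u > max_units)

# Three one-hour tasks all starting at 8:00 (minute 480 of the day):
tasks = [(480, 540, 1.0), (480, 540, 1.0), (480, 540, 1.0)]
print(len(overallocated_minutes(tasks)))  # 60 flagged minutes
```

Leveling hour by hour corresponds to shifting the intervals so they no longer overlap, and halving the units corresponds to the 50% approach above; either way the over-capacity minutes disappear.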

Resource Leveling – approaches
There are really three approaches to creating a project plan with resources appropriately allocated. Your choice strongly affects the effort you put into the plan up front, and how much effort is needed as you manage the plan. MS-Project can indeed be frustrating or rewarding, depending on how it is used and how well you understand its behavior.



How to approach leveling
Prior to leveling (or, more likely, to tune the way Level Resources works), you may want to set task priorities. Just add a column for “Priority”. Priority is an indication of a task’s importance and its availability for leveling. Tasks with the lowest priority are delayed first, and those with the highest priority tend to be scheduled earlier. The priority value you enter is a subjective value between 1 and 1000, which lets you control the leveling process. For example, if you don’t want Project to level a particular task, set its priority to 1000. By default, priority values are set at 500.
In most cases, consider leveling overallocated resources only after you have entered all task scheduling and resource information. In some cases, you might want to level resources, examine the outcome, and then adjust other task and assignment information.
When entering schedule information for your tasks, keep the following in mind to make sure MS-Project schedules your project accurately, and to help prevent unnecessary resource overallocations. Here are some tips:
  • Task Dependencies
    Use task dependencies to reflect a necessary sequence of events. Don’t overuse them, as each additional dependency can result in resource underutilization or completion date slippage.
  • Constraints
    Avoid inflexible constraints except where necessary. Inflexible constraints tie a task to a date: the Must Finish On and Must Start On constraints. You can also specify that a task must start on, or finish no later than, a particular date. Such constraints limit the adjustments Project can make when determining which tasks to move during leveling.
  • Priorities
    These values act as “hints” to the resource leveling engine that affect the schedule sequencing of tasks. Tasks with the lowest priority are delayed or split first. Use a task priority of 1000 (meaning do not level this task) only when a task absolutely cannot be delayed. 500 is the default value.
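As a toy illustration of how priority can drive sequencing, here is a greedy single-resource leveler in Python. It is a sketch of the concept, not MS-Project’s actual algorithm, and the task fields are invented for illustration:

```python
def level_by_priority(tasks):
    """Greedy sketch: schedule a single resource's tasks back to back,
    highest priority first, so the lowest-priority tasks are delayed.

    tasks: list of dicts with invented fields "id", "hours", and
    "priority" (1-1000; 1000 = schedule first / never delay,
    500 = default, echoing MS-Project's convention).
    """
    schedule = []
    next_free_hour = 0
    # Python's sort is stable, so equal priorities keep input (ID) order.
    for task in sorted(tasks, key=lambda t: -t["priority"]):
        start = next_free_hour
        next_free_hour += task["hours"]
        schedule.append((task["id"], start, next_free_hour))
    return schedule

plan = [
    {"id": "A", "hours": 4, "priority": 500},
    {"id": "B", "hours": 2, "priority": 1000},  # must not be delayed
    {"id": "C", "hours": 2, "priority": 300},
]
print(level_by_priority(plan))
# [('B', 0, 2), ('A', 2, 6), ('C', 6, 8)]
```

Because the sort is stable, tasks with equal priority keep their original order, loosely echoing how “Priority, Standard” considers priority before the standard criteria.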

Some recommendations on settings when leveling resources:

  • Never use “Automatic”
    Manual is best, so you don’t get blindsided by automatic changes. Plus, automatic leveling can be slow in a large plan.
  • Start with “Day by Day” for overallocation
    If you need to eliminate all overallocations, go with minute-by-minute.
  • Enable “Clear leveling values before leveling”
    Otherwise you will accumulate delays and elongate the plan.
  • Level the entire project or from the current day forward
    You may wish to delete leveling for a range of tasks, which you can easily do afterwards.
  • Use “Priority, Standard” leveling order
    This allows you to use “Priority” as a hint to task sequencing. See the section below for more information on what is going on in the engine for this setting.
  • Enable “Leveling Can Adjust Individual Assignments On A Task” only if you find that resources are insufficiently allocated
    This setting allows multiple resources assigned to a task to have their day-to-day assignments tuned to optimize completion of the task. It has no effect unless multiple resources are assigned to one task.
  • Disable “level only within available slack” if you want some meaningful resource leveling
    Note this enables the engine to adjust your completion date.
  • Disable “Leveling Can Create Splits In Remaining Work” unless you are ok with work getting stopped and started at random to create full utilization
    Bear in mind there are stresses and task switching overhead for human beings that get yanked between tasks seemingly at random.



What “Leveling Order” does
There are three values for leveling order, which drive how the underlying leveling engine performs. Here’s what’s going on for each of these values:


My recommendation? Avoid “ID Only”. I make a minimal number of adjustments to “Priority” values, then use “Priority, Standard” so my priority settings are taken into account.
When MS-Project levels resources, it only delays tasks (or splits tasks if you have allowed that). It does not:
  • Reassign tasks
    It is up to the PM to make the resource assignments.
  • Optimize a resource's allocation
    Because leveling does not move tasks earlier or reassign units, a resource flagged as overallocated might become underallocated as a result of leveling. If you have multiple resources assigned to each task at varying units, you are likely to have underallocated resources, as MS-Project will level to ensure there is no overallocation.
  • Reassign units
    For example, if I am assigned to work on two tasks that are both scheduled at the same time, leveling won’t change my units so that I work on both tasks at 50 percent.

When you’re ready to have Project level resources, on the Tools menu, click Level Resources. To accept all the defaults, click Level Now.

After Project finishes leveling an overallocated resource, certain tasks assigned to that resource are split or delayed. The split or delayed tasks are then scheduled for when the resource has time to work on them. You can see the results of leveling in the Leveling Gantt view, which graphically shows preleveled values compared with postleveled values, including newly added task delays and splits.

The effect of delay on leveling and scheduling
Leveling delay lets a project manager precisely manage the start and end of every task without adding dependencies and without fixing the start date. Leveling delay is a hidden field that exists at two places: the task level and the resource-assignment level:
  • Task Level
    Task level is the easiest to apply, but the hardest to manage. Simply add the “Leveling Delay” column to the Gantt Chart view, or use the “Leveling Gantt” view. Enter the number of hours or days of delay, and the entire task shifts out in time. Note the delay must be in elapsed days or hours; elapsed days do not honor holidays and non-working time. A project manager must often adjust these delays manually as task start and end dates shift.
  • Resource Level
    Resource-assignment leveling delay can be entered for each resource assigned to a task. It can be entered in work-hours or work-days, and provides the flexibility to level out any imaginable work. If two people are assigned to a task, but only one is overallocated by 4 hours, delay the overallocated resource by 4 hours; the other resource’s start date is unaffected. To delay the entire task, though, every resource assigned to the task must be delayed. To make maintenance simpler, allocate one and only one resource per task.

Leveling delay is measured from the dependencies of the task. For instance, if task #4 and task #5 are both dependent upon task #3, ending on Monday, they will both begin on Tuesday with no delay. Putting a two-day delay on #5 will make task #4 start on Tuesday and #5 start on Thursday. If the end-date of #3 moves to Tuesday, they will both roll forward to Wednesday and Friday start-dates.
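The date arithmetic in that example can be checked with a tiny model (days numbered Monday = 0; calendars and non-working time ignored; the function is illustrative, not an MS-Project API):

```python
def scheduled_start(predecessor_finish_days, leveling_delay=0):
    """Start the day after the latest predecessor finishes,
    plus any leveling delay in days."""
    return max(predecessor_finish_days) + 1 + leveling_delay

# Task #3 finishes Monday (day 0); tasks #4 and #5 both depend on it.
assert scheduled_start([0]) == 1                    # #4 starts Tuesday
assert scheduled_start([0], leveling_delay=2) == 3  # #5 starts Thursday
# If #3 slips to Tuesday (day 1), both roll forward:
assert scheduled_start([1]) == 2                    # #4 -> Wednesday
assert scheduled_start([1], leveling_delay=2) == 4  # #5 -> Friday
```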

By combining units (% allocation) and leveling delays, a project manager can accurately represent complex work. People can work multiple tasks simultaneously, participate in multiple chains of dependent tasks, and still work all available work hours, and not one hour more.
“Level Resources” adds a “Leveling Delay” that is visible as a column (field) in MS-Project. It is shown in elapsed days (edays), which represent calendar (elapsed) time, not work time. You can use the shorthand “ed” (such as 5ed) to represent 5 elapsed days, and you can edit these fields to manually remove or adjust the output of resource leveling.

Some observations looking at the Leveling Gantt view, looking at “Leveling Delay”:
1. Tasks ID 2 and 3 were slipped within the same day to avoid overallocation
2. Task ID 7 has a predecessor, Task ID 4, so the latter has no green trailing line
3. Since priority was not set, the tasks are generally in ID order

Task level delays described above move an entire task with all assignments. A more fine-grained adjustment is to insert delays at the resource (assignment) level within a task.

In summary, Level Resources requires some understanding to use effectively, but it can save you significant effort in tuning a plan to utilize resources well and achieve your project objectives.

Sunday, January 25, 2009

Data Configuration Management

Over a period of decades IT management has developed a mature discipline around Software Configuration Management (SCM). The growing challenge in IT is that data has assumed an increasingly common role of controlling and affecting software logic, yet often sidesteps the rigorous change management controls that are in place for software.

What is Software Configuration Management?
Simply put, this is the process of tracking and controlling changes to software. Configuration management practices include revision control and the establishment of baselines. From a management perspective, this provides control over what changes are made, what the differences between software versions are, and how to roll them back. A robust set of tools exists for managing changes, versioning software components, comparing source code, and automating builds.


The power of data
Users have always been able to change data via applications. However, there are certain categories of data that, I would argue, need to be managed with the controls already devised for software:

  • Business Logic
    Configuring systems to take different courses of action based on data settings. What was once hard-coded can be exposed to end-user configuration.
  • Rules based systems
    These are systems explicitly designed to formalize rules, generally eschewing code for data. It is an easy step to enable end users to edit the rules.
  • Templates
    These affect display, formats, transformations, and any number of system inputs/outputs. These include XSLT.
  • Dynamic data
    It is considered extremely chaotic to have code actually change code in real time; this rarely occurs on purpose outside academia. Code is typically considered static and unchanging. Configuration Data, however, is easily changed in real time.


The heightened challenge of data
Data typically is not retained with the same rigor as source code:

  • Data often exists as a point in time
    When a data element changes, the history is typically not retained. Databases commonly do not allow for native rollbacks in the way source code management systems allow.
  • Data does not allow for comments
    Source code (based on the underlying language) allows for in-line comments. Data almost by definition does not support comments.
  • Data versioning today requires point-solutions
    If you want a history of data changes, it typically requires the addition of a dimension for each element. This is custom coding, and given the pressures of today’s development schedules and the need for performance, the data cannot easily be versioned.
  • Data is undated
    Examine a database; can you tell when a given field within one record was changed? And by whom?
  • Data does not execute serially
    What makes source code relatively easy to walk through is the sequential and serial nature of executing instructions. This is part of the Von Neumann architecture, first described by the visionary computer scientist John von Neumann, which segregated data from code and described a control unit (today, a CPU) for serially executing code. This leads to the next point.
  • Data has no meaning without context
    The interpretation of data is left to the reader. The data by itself can be asserted to be meaningless without the context of its definition, purpose, restrictions, or values. Source Code by definition can be interpreted through the lens of the compiler or interpreter for which it is explicitly designed.
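As a sketch of the point-solution versioning mentioned above, a history table plus a trigger can give data the who/when/what-changed trail that source control gives code. This example uses Python’s built-in sqlite3; the table and column names are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE config (key TEXT PRIMARY KEY, value TEXT);
CREATE TABLE config_history (
    key TEXT, old_value TEXT, new_value TEXT,
    changed_at TEXT DEFAULT (datetime('now'))  -- addresses "data is undated"
);
-- Every update to config automatically records the prior value.
CREATE TRIGGER config_audit AFTER UPDATE ON config
BEGIN
    INSERT INTO config_history (key, old_value, new_value)
    VALUES (OLD.key, OLD.value, NEW.value);
END;
""")
conn.execute("INSERT INTO config VALUES ('max_retries', '3')")
conn.execute("UPDATE config SET value = '5' WHERE key = 'max_retries'")
print(list(conn.execute(
    "SELECT key, old_value, new_value FROM config_history")))
# [('max_retries', '3', '5')]
```

A changed_by column could be added the same way; the point is that the history is captured by the database itself, rather than relying on each application to remember to log.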

There are several gradual trends that have led to data-driven systems taking on aspects of source code:

  1. Cloud Computing / SAAS
    Software As A Service may just mean hosted application software, but when there are economies of scale via multi-tenancy, you have multiple instances of the application serving many customers. Each customer may have the software configured slightly differently, and herein lies the rub: customization is often done via data configuration settings, and those settings need to be treated with the same care as source code.
  2. Need for Speed
    The business desire to move quickly conflicts with configuration, change, and release management, which impose control disciplines. Business users gradually realize they can bypass the iron grip of these processes by changing configuration information.
  3. Template standardization
    As software is commoditized, templates become standardized; both within applications and within the industry. This provides a rich and powerful capability for transforming software behavior without programming or touching source code.


The crux of the issue
There has long been a focus on Intellectual Property (IP), data protection, ensuring against data loss, and data availability, but what about managing changes to data that can impact the organization through how systems behave? Are changes tested and tracked, approved, and deployed on a schedule, with the ability to undo? Is there a systemic way to associate system behavioral changes with control data changes?

Network and security specialists have long grappled with this issue, as firewall rules and network routing are commonly data with the capacity for significant impact on the organization. Access control related to financial data has taken on stricter requirements as a result of SOX. However, Configuration Data remains the elephant in the room that many prefer to ignore.

Best practices
1. Limit users that can change “Configuration Data”
2. Ensure there is a log of Configuration Data changes
3. Ban Configuration Data that is changed dynamically by an application.
4. Demand the ability to roll back from any change, and retain the control of roll-back within IT
5. Segregate roles of those that change data, and those that approve the changes
6. Have data changes made first in a test environment
7. Consider using QA to vet changes
8. Define Configuration Data clearly, and put Configuration Data changes through existing change control processes
9. Establish policies that reflect the above, publish policies and track that policies are accepted annually by all affected staff.

Thoughts for the future
A checksum can be used to verify that configuration data has not changed. Systems can be designed to report changes to configuration data, or even be designed to refuse to start, if it is preferable not to activate a system than to run a system with suspect data.
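A minimal sketch of the checksum idea in Python, assuming a JSON-serializable configuration (the names and config contents here are illustrative; any stable serialization and strong hash would do):

```python
import hashlib
import json
import sys

def config_fingerprint(config: dict) -> str:
    """Checksum of configuration data; key order does not matter."""
    canonical = json.dumps(config, sort_keys=True).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()

# Fingerprint recorded when the configuration was last approved
# (the dict contents are purely illustrative).
APPROVED = config_fingerprint({"max_retries": 3, "feature_x": False})

def start_system(config: dict) -> None:
    # Prefer refusing to start over running with suspect data.
    if config_fingerprint(config) != APPROVED:
        sys.exit("configuration changed since approval; refusing to start")
    print("configuration verified; starting")

start_system({"feature_x": False, "max_retries": 3})  # key order irrelevant
```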

If Configuration Data is effectively indistinguishable from code in its effect, why should it be subject to less stringent controls than code? In summary, Configuration Data needs to be managed with the same level of control as source code. To be effective, one needs first to define crisply what Configuration Data is, so as not to swamp an organization with unnecessary controls.

Sunday, January 11, 2009

Organizational culture: The Process Dimension

To be effective within an organization, the IT manager needs to understand the organization, deeply. Experience (and grey hairs) give an IT manager the tools for understanding and synthesizing an organization’s culture both to function effectively within it, and to chart out and implement any changes that are both needed and feasible.


What is “Organizational Culture”
Organizational culture is the personality of the organization, and as with people, personality is deeply ingrained and not at all easily changed. Culture is the aggregation of behaviors, values, beliefs, norms, and tangible signs of an organization’s staff. Certainly the industry plays a part, as do staff and location diversity and the organization’s stories. A simple look at the “About Us” page on a company’s website can provide telling insights. While there are many ways to examine the culture of an organization, for the moment let’s consider its cultural compatibility with a process orientation.


The reluctant organization
There exists a world-view that process is simply bureaucratic overhead that gets in the way of actual work. Amazingly, in this day and age there exist organizations and pockets of IT that have inculcated this belief system. Symptoms include endless fire drills and wringing the last bits of overtime out of beleaguered staffers under increasing stress and pressure. Introduction of process from the bottom up is destined to fail. Such organizations find they cannot scale, suffer high turnover, and deliver inconsistent quality. Some 3,000 years ago King Solomon, writing under a pen name, said “What is truly crooked cannot be straightened,” which is as applicable as ever. As long as this weltanschauung (world-view) radiates from the top of the organization, the organization will not change. As the wry Russian expression states, “The fish rots from the head.”


Auto-industry
Let’s briefly examine the evolution of the post-war automobile industry. The concept of making a science of production by instituting systematic process on solid statistical foundations was being charted by two great men, W. Edwards Deming and Joseph Juran. U.S. companies rejected the concepts, but Japanese manufacturers adopted the principles, leading Japan to reverse its reputation for shoddy goods and ultimately overshadow the U.S. car industry by adapting and implementing TQM (Total Quality Management). This exemplifies the power of rigorous process, as well as the feasibility of organizational cultural change when the will to change and top-level support exist.


How to read an Organization’s culture
Before taking on a role within a company where you will drive process (such as project management, program management, audit, or PMO), first look for clues of acceptance. Here’s what to look for, and some probing questions to ask:


The Hercules Mythology
When an organization collects and nurtures irreplaceable people, it can be indicative of poor cross-training, lack of delegation, and possibly staff who become unmanageable as they develop a sense of self-importance that can be caustic to staff morale. One symptom of this is myths around key individuals that typically include herculean efforts. Often people eager to achieve this kind of stardom allow problems to fester until a crisis develops, so they can pull an “all-nighter” to “save the day”. Staff can become addicted to the adrenaline rush, leading to avoidable crises.


Organizational hierarchy
A strict command-and-control hierarchy with no cross-silo collaboration can indicate duplication within each department, and a lack of communication that creates avoidable waste. Taken to an extreme, this organizational structure is often associated with the philosophy that “management knows best,” and is often exhibited in overworked managers and staffers with insufficient information to be effective.


Beliefs around past failures
Why has the organization failed in the past? If the answers are around having the wrong people, then the belief system is that hiring the right people will solve the problem. Such organizations may have to overpay to hire overqualified individuals, who through brute force achieve goals, yet do so in an unpredictable and irreproducible fashion.

Scapegoats
High turnover is another symptom that a core organizational belief is that the organization would achieve greatness if only it had the right people. Consistently high turnover over time raises the question of whether the problem is the people or the organization. A similar symptom is consistently high turnover in one job or role; this can indicate a role set up for failure that cannot possibly succeed. One wonders if the adage applied to people applies to organizations as well: “one definition of insanity is doing the same activity repeatedly, yet expecting a different outcome.”

Overtime
Is overtime common? Overtime may be used to compensate for poor estimation, overoptimistic commitments, or poor and rapidly changing prioritization. Overtime is common in IT, but be aware that one can work 10%, 20%, even 30% overtime, yet cannot increase productivity by 200% or more. And overtime is self-defeating, as staff productivity degrades due to impacted morale and sleep deprivation.

Lessons learned
Does the organization engage in any kind of post-mortem or lessons learned exercise after a project? No matter how informal or inconsistently applied, such a practice within an organization gives hope that in its desire to learn, it acknowledges there are indeed lessons to learn, and ways to work better, opening the possibility of gradually introducing a process orientation.

Summary
In summary, a process orientation in a corporate culture is not just a recipe for quality, consistency, and scalability; it is a foundation for organizational and personal success, and may well be a core competitive competency and a barrier to market entry for less enlightened organizations.