Saturday, January 1, 2011

Evaluating PPM solutions

Project Portfolio Management (PPM) is an enterprise solution class for governing the full project lifecycle. One solution we tried was Changepoint from Compuware. The following summarizes the issues in the core software that led to project failure and replacement. I share this so others can consider these factors when evaluating any PPM solution:

• Performance
Reports take an inordinate amount of time to run. Reports that span months and projects time out. Exporting reports takes a long time or fails for larger reports. Sorting data within a report takes an extraordinarily long time. Large reports produce out-of-memory errors and lock up the user's PC.

• Baselines not supported
Changepoint supports only one baseline, and has very limited support for baselining budget and schedule at the project and task level.

• Reliability
Users encounter random errors, lost data, and cryptic warnings. The MS-Project plug-in self-corrupts. Reports self-corrupt. Custom SQL in reports gets spontaneously dropped. PMs are forced to repeat changes to plans after the MS-Project plug-in crashes with obscure errors.

• Administrative overhead
Maintaining plans, users and reports requires an excessive level of support. The effort to add users and manage the system is far greater than under the previous tool (MS-Project Server). Each new plan must be manually opened to each user in the company, one at a time. Attributes such as Application can only be assigned to tasks one at a time.

• Inflexibility
Once a project is allocated to a client or an initiative, it can never be changed. Once a budget is frozen, it can never be changed. The MS-Project component is neither backward nor forward compatible between releases, or even between service packs.

• Usability
The user interface is outdated and difficult to use. Most drop-down menus require several clicks to populate before they can be used. The user interface confounds users and flies in the face of industry standards. Scheduling resource terminations or effective dates triggers a cascade of problems, rendering these features unusable. Project Managers have been unable to effectively plan for effort and cost within plans, as small changes to plans result in huge, unexpected changes in schedule and cost.

• Real-time
The system requires SQL batch jobs to both bring data up to date, and correct data problems. Core financial data is often out of date, requiring administrative staff to run batch jobs on request.

• Licensing
When there are insufficient licenses, additional users cannot be added even briefly, which interferes with resource setup, such as when resources need to be activated to run a report on their time. If even one of the 310 license-tracked features has no available licenses, Changepoint prevents the user from being added, even while other feature licenses sit allocated. We believe the license-tracking system sub-optimizes license allocation and unnecessarily allocates multiple licenses to users. The internal admin account consumes a power-user license, the most expensive license available, which cannot be given to users. The licensing scheme is needlessly complex to the point of being unmanageable, with 310 different license-tracked features. Despite careful attempts to calculate and track licenses, we have been surprised repeatedly by the need to purchase more.

• Architecture
The Changepoint architecture is outdated and monolithic, combining all client data into one database with the core product. This increases maintenance effort and complexity, and impacts reliability. The reliance on batch jobs is problematic, as data is at risk of being out of date pending a batch job run. Vendor-provided reports are hard-coded and not user-editable.

• Budgeting
Budgets cannot be adjusted historically. The tool cannot accommodate the company budget process without inordinate effort, along with difficult and obscure adaptation of tool features.

• Integration
The reports are not easily integrated into the presentation layer of other tools for summary and dashboard reporting. Changepoint's ability to work with MS-Project, our standard project management tool, is severely limited. Many fields are not supported within MS-Project. Single Sign-On fails when a user changes their password. LDAP integration is weak.

• Tracking of converted resources
Changepoint by design cannot accommodate and report correctly on resource conversions from consultant to employee.

• Timesheet summary tasks
Users cannot see the summary task for their timesheet tasks. This causes user confusion, and headaches for Project Managers, who must make every task name readable on its own because summary tasks are not visible.

In summary, proper evaluation is absolutely critical to PPM project success. Forewarned is forearmed. I won't make the same mistake twice. Heck, I wish I hadn't made it even once!

Sunday, December 6, 2009

IT job market transformed: 21st century view

Even when I am in the same job two years in a row, I find my job is completely different. I’ve been saying that this entire decade; constant change is the one thing that doesn’t change.

The skills and technologies I am compelled through my work to pick up are often those I never envisaged I would need, and had possibly never even heard of the year before.

The very kinds of unique technology skills that can make you indispensable today are those that 15 years ago made you un-promotable, in a forgotten era of near-guaranteed employment. As a well-meaning manager once explained to me earlier in my career, "We'd love to promote you, but you're the only one who can do the job you are doing today." That led me to change my modus operandi and work hard to train my successor.

In a hierarchical organization, where promotion is the only path to success and better compensation, finding and training your replacement was central to moving forward, gathering titles and heading towards that gold-watch retirement after untold decades of service to a single employer. Today that needs to be supplemented with a near contradiction: ensuring you have highly desirable, even unique, skills.

In his book The World is Flat, New York Times columnist Thomas Friedman presents a view of the future in which evolving technologies will level the playing field for business owners worldwide. Traditional corporate hierarchies are being replaced by highly specialized online communities sharing similar business interests. This is the people-level view of the disaggregation of the corporate monolith, as outsourcing and SaaS hollow out companies, creating flexible business shells that keep their core competencies in-house and outsource everything else possible. Do the jobs disappear? No, but they move around dramatically. There aren't fewer jobs, but they are eliminated and created at a dramatically increasing rate.

According to Friedman, to survive in this ever-flattening world, individuals must diversify their skills so that they remain viable competitors across many different careers. Those who do, those who attain a level of specialization that cannot be outsourced are, he claims, "untouchable." So if you want job security, join their ranks. Become an “untouchable" now.

And if you don't? The fallout from such dramatic technological change may mean that those who haven't kept pace will lose the race for 21st century jobs.

How do you gain these skills? It's no longer a 9-to-5 world. Showing up with a suit and briefcase and sipping your coffee is not just insufficient; it is an anachronism. Those who gain these skills are either lucky, or work hard at it. The latter live and breathe their technology skills and accrue them with passion.

Thoughts?

Joel

Saturday, February 21, 2009

Data Warehouse Project: Lessons Learned

Summary
The following summarizes how I rescued a data warehouse project that was on track for a disastrous failure. The Purpose? To provide valuable guidance to assist Project Managers on existing and future Data Warehouse projects, and to ensure that the same mistakes don’t happen twice.

Those who cannot learn from history are doomed to repeat it. - George Santayana

Lessons learned
This assessment summarizes the lessons learned from joining a project already millions over budget and years late. The insights from this project have been segregated into three categories:
A. Actions taken immediately to rectify the project
B. General project management lessons learned
C. Data Warehouse specific lessons


Actions taken immediately to rectify the project

  • High level view
    A single-page dashboard of sub-projects, their status and dates.
    A single-page dependency plan was created to highlight the critical path and cross-organizational dependencies.
    A single composite project plan was constructed for the first time.
  • Coherent Communication
    A communications plan was created and published that defined roles, responsibilities, escalation and communication process.
  • Active project management and Coordination
    - Daily meetings (internal and with vendor)
    - Monthly meeting in person with vendor
  • Phased approach
    An achievable first phase that provided maximum value within an aggressive timeframe was scoped, and universally agreed by Executive Management, Marketing and Business leaders.
  • Tracking
    Migration from a flimsy unstructured “HotList” to a formal Issue Log
  • Risk Management
    A comprehensive risk assessment and mitigation plan was created and published to look ahead to plan for and manage contingencies.
  • Vendor Project Plan
    The vendor was required to redo their project plan to:
    - Correct weaknesses
    - Archive completed phases/tasks
    - Map precisely to the new Program Dashboard
  • Specs shared
    Amazingly, vendor specs had been hidden from the client.
    Specs were acquired from the vendor, enabling review, feedback, and scheduled fixes.
  • Synchronizing plans
    A single consolidated project plan did not exist. A process was put in place to frequently synchronize vendor plans for accuracy and to manage critical path and dependencies.
  • Milestone and deliverable management
    Compelled vendor to add all short term project deliverables to the Issues Log for single location tracking.
  • Resources
    A Business Analyst (BA) was brought on board and dedicated for duration of project
    PM resource and role changes
  • Vision document
    A vision document was created to define the goals of the remainder of the project in a phased approach.
  • Vendor relationship management
    Face-to-face visits with vendor were increased in both locations, and frank discussions uncovered and addressed a range of inefficiencies, concerns and confusion.

General Project Management

IT involvement
Belated IT involvement in vendor selection, management, requirements and specifications resulted in a vendor being selected without IT's capability assessment, in poor coordination, and in a lack of project support within IT.
QA (including requirements, quality standards, and the IT QA role) was not defined. Unlike almost all other ongoing IT projects, there was no QA involvement throughout the project. A cross-functional team of IT and business QA should have been put in place early in the project to guide the quality requirements, define the test criteria, and complete testing.

Phased project approach
This project was conceived, planned and managed as a single scheduled deliverable to the business. No phases, no interim deliverables. A “phased” approach is useful because:

  • A phased approach allows early delivery of highest value components to the business
  • It allows team members to break the project down into more manageable segments.
  • Each phase can be brought to some sense of closure as the next phase begins.
  • Phases can be made to result in discrete products or accomplishments to provide the starting point for the next phase.
  • Phase transitions are ideal times to update planning baselines, to conduct high level management reviews, and to evaluate project costs and prospects
  • Each phase can be a “gate” for evaluation for engaging in the subsequent phase, based on actual costs, delivered capabilities and realized value
  • A Proof of Concept (POC) is an even more cautious approach to demonstrating vendor, technology and architecture viability.

Scope definition

  • Project scope was inadequately defined
  • Weak and incomplete requirements
  • No defined criteria for success
  • Moving target syndrome
    Also known as “Scope Creep”
    Quality and thoroughness of testing are examples of evolving targets. Specification review was done in the final months of the project, leading to further delays and scheduling of development fixes as a result.

Estimation

  • The project was not realistically estimated with input from all involved parties.
  • There was no robust estimation of the full project cost.


Roles and responsibilities
Roles and responsibilities for vendor and client were not clearly defined, creating crossed wires, confusion and tension. Examples of fundamental roles not clearly established until the end of the project include:

  • Precisely who assesses reports and who decides and approves data quality.
  • Who makes decisions across the range of areas; this gap led to vendor frustration.

Custom work done instead of purchasing an available vendor product

A significant amount of custom coding was done to avoid the cost of product licenses. While licenses can be expensive, custom development carries huge risk and delays, as this project experienced.

Availability

  • Design vs. Requirements
    Data freshness and system availability were not clearly defined as requirements, and final design specifications were not checked against business needs.
  • SLA
    Neither an overall IT Service Level Agreement nor sub-agreements for each vendor component existed.

Vendor Management

  • Sufficient vendor management was not in place.
  • Vendor selection was not done through a formal RFI/RFP process with multiple vendor responses and analysis of capabilities, proposals and pricing. Effectively, a vendor was chosen on a business user's whim, without consideration of capabilities or of how the vendor would be managed.
  • The Vendor was allowed to manage to their own SOW/Internals, and not to the project deliverables
  • The vendor contract did not outline deliverables clearly
  • The vendor contract specified a long term lock-in, constraining the ability of the client to manage the project and vendor.
  • Structure of coordination, design, operational management and escalation responsibilities between vendors was not clearly thought through
  • No single point of internal contact for vendors resulted in delayed and ultimately wasted vendor effort and frustration in seeking guidance and clarification.
  • Vendor permitted not to share specifications with the client, preventing client IT from identifying and correcting mistakes before coding.
  • Vendor was not shown the larger enterprise view of the overarching project
  • Prioritization and Resource Planning
    The vendor did not publish dedicated resource assignments for the PM and business to use in guiding sequence and priority.

Resource Management
a. No visibility into resource constraints and utilization
b. Client resources were committed to multiple concurrent projects without a means for scheduling or prioritizing efforts, or managing and escalating conflicting resourcing needs.
c. IT Team members had dramatically different perceptions of project priority.
d. The following IT resources should have been dedicated to this project:
• Data Architect
• Business Analyst
• Project Manager

Unstructured and chaotic Communication
a. All communications were done via email
b. All status, questions and updates were sent to all staff and executives
c. There was no formal structure or method to communication or escalation
d. There was no single point of contact within the vendor for communication, coordination and escalation. Effectively, everyone at both the client and the vendor was spammed with a stream of details, drowning out useful communication and information.

Decision Making

  • Decisions were made without IT involvement
  • IT concerns and recommendations were overridden without sufficient consideration


Architecture
Much like Requirements creation, Architecture should fundamentally remain within the client organization with any outsourcing focused on analytics, development and service. This would ensure projects conform to long term architectural needs, and would retain the intellectual capital in-house.

Data Architecture
There was no overarching Data Architecture defined up front that encompassed the full set of applications, data sources, staging areas and repositories. A Data Architecture describes the data entities as well as how the data is processed, stored, and utilized. Instead, there were seven data architectures, including canned applications, that were integrated ad hoc.


Project Planning:
a. Multiple separate and partial project plans were used rather than a single project plan, which prevented critical path management, adequate resource scheduling, and delivery date visibility.
b. Project plan did not provide a roll-up of detail into summary tasks and milestones
c. Resources were over-allocated without review or revision
d. No critical path and dependency analysis and optimization
e. No clear definition of deliverables
f. The schedule was not created based on the project plan, and hence lacked realism and lost credibility over time
g. Detailed Project plan not aligned with business-oriented deliverables
h. Vendor project plan was not fully integrated into the client project plan
i. Vendor was not required to structure the project plan to fit business objectives
j. Task/project dependencies were not clear
k. The project was inadequately defined, with remaining requirements left to be defined later.
l. Inconsistent project phases, tasks, level of detail (who, what, how).
m. High level mission and objectives not defined and communicated

Project Management:

  • Passive project management resulted in schedule drift
  • Active project management techniques were not utilized
  • Status meetings
    i. Meetings were weekly, instead of daily
    ii. Vendor was allowed to set the agenda and manage the meetings
    iii. Meetings were informal, no agenda, minutes or follow up
  • Tracking and managing assignments and issues were informal
  • Existing culture did not value, commit or manage to task/dates
  • Costs were not calculated or communicated
  • Schedule updates were not frequently calculated and communicated
  • The project schedule was not communicated, made visible, or widely believed
  • Dates repeatedly allowed to slip
  • Enhancement list growth impacted the schedule through scope creep, and was not managed against the original mission or separated into planned phases.
  • Inadequate Vendor Effort Oversight
    Vendor managed the plan, yet the majority of effort was largely unmanaged at the client site


Data Warehouse Specific
SLA
An SLA for data freshness, performance, availability, catch-up and load times should have been defined.

Set of data elements
The minimum set of needed data elements should have been carefully selected in advance, instead of loading all data into the data warehouse. Loading the full set slowed analysis and design, as well as the initial and daily extract, transform and load times.

Tools
An initial lack of ETL tools resulted in inefficient use of an external vendor for extracts. Tools may be expensive, but manual ("handraulic") effort is more expensive.

Data descriptions and definitions
The in-depth meaning of data elements should have been defined, documented, and shared to confirm with vendor that the meaning of the elements is clearly defined and understood.

Location of data cleansing
Where data is cleansed should have been defined; whether it is at the point of extract, load or in the Data Warehouse itself.

Level of data cleaning
The required level of data cleansing should have been defined in advance.

Resource
A Data Architect was not assigned or available, and the Project Manager was part-time. For a DW project, having a Data Architect and a data model early on is key to success.

Environments
Only a single DW environment was planned for. This prevented concurrent testing and production. It also prevented concurrent use of the production environment while loads were conducted in a separate environment.

Data model
The overall data model was not documented in advance. Data analysis into definitions, valid values, transformations, exception handling and testing was revisited after the project was largely complete.

Data history
There was no way to understand how data was populated into the data mart, when the source data was delivered, or whether the data was loaded manually or manipulated outside of the normal loading process.

Specifications
Vendor did not share specifications for review

Auditing
Insufficient auditing of the loads provided little early insight into problems, which were discovered late and drove up costs through rework.

Design best practices:
Traceability
Inability to trace data back to source
Counters in code should be avoided: counters, rather than actual records, prevent accurate reconciliation and tracing of data in the data warehouse.

Transaction management
No ability to reconcile partial data writes with roll-forward or roll-backward for transactional integrity.
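
To make this concrete, here is a minimal sketch (not from the original project; the table name is hypothetical) of wrapping a batch load in a single database transaction, so a failed or partial write is rolled back rather than left half-committed:

```python
# Minimal sketch: an atomic batch load with rollback on failure.
# Assumes a table fact_sales(id, amount) already exists (hypothetical name).
import sqlite3

def load_batch(db_path, rows):
    """Insert a batch of (id, amount) rows atomically; roll back on any failure."""
    conn = sqlite3.connect(db_path)
    try:
        with conn:  # commits on success, rolls back automatically on exception
            conn.executemany(
                "INSERT INTO fact_sales (id, amount) VALUES (?, ?)", rows
            )
    finally:
        conn.close()

# Usage: if any row fails (e.g. a constraint violation), none of the batch is committed.
# load_batch("warehouse.db", [(1, 100.0), (2, 250.5)])
```

The same pattern applies at the load-job level: either a whole load lands, or nothing does, which makes catch-up and reconciliation far simpler.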

Limited consistency checks
Only limited consistency checks were performed on data being loaded into the DW. Robust and comprehensive consistency checks are recommended.
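
As an illustration only (the column names and rules are invented for the example), row-level consistency checks can be run against staged data before it is accepted into the warehouse, with failures diverted to a reject file for review:

```python
# Sketch of pre-load consistency checks; rejected rows are written out with reasons.
import csv

VALID_CURRENCIES = {"USD", "EUR", "GBP"}

def check_row(row):
    """Return a list of consistency problems found in one staged row."""
    problems = []
    if not row.get("customer_id"):
        problems.append("missing customer_id")
    try:
        if float(row.get("amount", "")) < 0:
            problems.append("negative amount")
    except ValueError:
        problems.append("non-numeric amount")
    if row.get("currency") not in VALID_CURRENCIES:
        problems.append("unknown currency")
    return problems

def validate(staging_csv, reject_csv):
    """Split staged rows into clean rows (returned) and rejects (written with reasons)."""
    clean = []
    with open(staging_csv, newline="") as src, open(reject_csv, "w", newline="") as rej:
        reader = csv.DictReader(src)
        writer = csv.writer(rej)
        for row in reader:
            problems = check_row(row)
            if problems:
                writer.writerow(list(row.values()) + ["; ".join(problems)])
            else:
                clean.append(row)
    return clean
```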

Insufficient test scripts
Daily load test was determined to be entirely inadequate for business objectives.

Primary keys not generated from sequence generators
Primary keys were created by incrementing the previous maximum value by one. This approach is known to be error-prone and likely to generate duplicate keys where uniqueness is required.
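
A small illustration of the difference, using a hypothetical dimension table: the first insert mimics the fragile "previous maximum plus one" pattern, which can collide when two loaders read the same maximum concurrently; the second lets the database's own sequence/identity mechanism assign the key atomically:

```python
# Illustration only: "max + 1" keys vs. engine-assigned (sequence/identity) keys.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE dim_customer (id INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT)"
)

# Fragile pattern: two concurrent loaders can read the same MAX(id) and both
# insert id = max + 1, colliding on the primary key.
max_id = conn.execute("SELECT COALESCE(MAX(id), 0) FROM dim_customer").fetchone()[0]
conn.execute("INSERT INTO dim_customer (id, name) VALUES (?, ?)", (max_id + 1, "Acme"))

# Safer pattern: omit the key and let the engine's sequence assign it.
conn.execute("INSERT INTO dim_customer (name) VALUES (?)", ("Globex",))
conn.commit()
print(conn.execute("SELECT id, name FROM dim_customer ORDER BY id").fetchall())
# [(1, 'Acme'), (2, 'Globex')]
```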

Rejected data remediation
No designed ability to review or correct rejected data. Exceptional conditions were not captured or reported.

Error Messages
Lack of clear definition for logging, wording and communicating errors in a consistent fashion

Error Handling
Lack of clear definition for handling errors and exceptions.

Wrap-up

Data Warehouse projects present a unique set of challenges. This large project was rescued from failure with quick focused action that represents some best practices that can be of use for other projects. It is my intent that this can provide some guidance to assist Project Managers on future and existing Data Warehouse projects, and to ensure that the same mistakes don’t happen twice. Heck, the same mistakes shouldn’t happen once ;-)

Thursday, February 5, 2009

Leveraging QA to improve IT Governance

In a nutshell
Quality Assurance (QA) faces a range of organizational challenges that limit its effectiveness. This impacts more than just QA and overall software quality; it erodes IT Governance, putting organizations at heightened risk. By tuning roles and responsibilities, QA can become a critical contributor to effective IT Governance.

The challenges
Left to fend for itself without sufficient active management support, QA faces the following challenges to its effectiveness:

  • Involved too late to be effective
    QA is often involved only after software is tossed over the fence from Development. Without early involvement during requirements definition, QA is playing catch-up to understand the requirements. Worse, the opportunity for QA to shape the requirements to eliminate ambiguity and ensure testability is squandered.
  • Late before we start
    Organizations tend to resist date slippage long after the target date is no longer realistic. When Development is late, all too frequently the time allocated to QA is "compressed", leaving insufficient time for testing.
  • Overruled by Development
    Development tends to have a larger and more influential staff than QA, and is often vocal in its attempts to steamroll QA.
  • Version confusion
    Weak Software Configuration Management (SCM) can erode confidence in the system being tested. Is it the right version? Can it be rebuilt? Is it going to behave identically in Production?

Approaches to solutions
When in the process QA is involved, and how much time QA is granted, can be rectified in two general ways: via a refined software lifecycle methodology that specifies when QA gets involved, and via SLAs that set reasonable, consistent and measurable time periods for standard tests.

Empowering QA comes down to Decision Rights as part of IT Governance. It can start by simply granting QA a blanket veto over project deployments. No exceptions. The first time QA is overridden and code deployed is the moment when QA is emasculated, demoralized and ineffective.

The best way to gain confidence that the software deployed is the software tested, and that the software can be rebuilt from source code, is to revisit SCM. The answer is not another tool, but changing roles and responsibilities to empower QA to manage SCM. My controversial recommendation is to move the build process and SCM repository to QA. If QA is able to do the build from source, then there is independent confirmation that the software can be rebuilt by someone other than one developer on his private machine; do you really want to entrust your project's source code and build exclusively to a sandal-wearing night-owl clattering on his keyboard and guzzling Jolt Cola who seems a tad too attached to wearing the same clothing daily? It also moves Development one more level away from production deployments. In other words, if QA can do a build and deployment to the QA environment, Development is freed from documenting and supporting Production deployments. There is a cost to this: additional environments, software licenses, training, and documentation. But you will gain additional control and improved governance. It's your system, manage it….

Wednesday, January 28, 2009

Resource leveling

Resource Leveling - why
First, a simple question: why level resources? Simply put, if resources are not leveled, they are either overallocated or underallocated (or both). Overallocation means the scheduled work is not likely to occur (or the resource may be annoyed, overworked or burned out). Underallocation means suboptimal utilization and billing of resources, lowering profitability. Lastly, your schedule won't be accurate unless resources are correctly loaded.

As a rule, I've always used the Dependency Driven approach, where almost every task is linked explicitly. Simply put, the Dependency Driven approach uses dependencies (predecessors) to sequence the work to be done by a given resource. The common myth is that resource leveling never works in the real world, breaks project plans, and should be avoided like the plague. Much of this myth is due to a lack of understanding of how to utilize the tool at hand (our friend, MS-Project). To start with, the Dependency Driven approach tends toward an overuse of dependencies, which masks the actual work-related dependencies. In all fairness, not every project is a candidate for resource leveling; in fact, for resource leveling to be effective, the underlying plan should already have the following characteristics:
- Minimum number of tasks with more than one resource assigned
- Removal of any constraints that are not absolutely necessary

What’s common wisdom for Resource Leveling?
- Never click automatic resource Leveling
- Always save your plan before even thinking of resource leveling
- It messes up your plan, throwing tasks in random order, and blowing dates
- It’s hopelessly broken and unworkable



Well, let’s explore Resource Leveling and utilization, and you may find my analysis below can help save huge time and effort in using Level Resources to optimize your plans. Let’s start with how MS-Project reports overutilization.

Crying wolf: how MS-Project over-reports resource overallocation
Has MS-Project ever identified resources as overallocated while you believe the plan is correct? Well, the calculation engine in MS-Project is somewhat primitive and can over-report utilization. As an example, I created a plan with three one hour tasks that occur on the same day. MS-Project immediately flags me as overallocated, even though I only work three hours in an eight hour day. That’s because MS-Project schedules all tasks to start at the same time of the day. Let’s have a look at a simple 3-task plan:


Here’s the resource sheet, note I am “red” meaning overallocated:


Here’s the resource graph:


Why am I overallocated? Let’s take a closer look at the default resource scheduling by MS-Project by looking at the Resource Usage view, looking in 15 minute increments at how these tasks get scheduled by default:
Clearly, the tasks are all scheduled to start at 8am.
Interestingly, if I use “Level Resource” it will not fix the problem, unless I set a high level of granularity, by telling the leveling engine to fix problems minute-by-minute or hour-by-hour. Note the default “day-by-day” leveling will not fix the problem. Here’s how it looks after I level on an hour-by-hour basis:
Note the red is gone, indicating there is no more overallocation. The actual calculation engine will report overallocation if a resource has to work for more than 60 seconds in any one minute of a project. This is because the scheduling and calculations are a bit simplistic, combined with MS-Project natively using one-minute increments for calculations. There are three basic approaches to resolving this:
  • Tools/Resource Leveling
    You will need to set fine granularity (hour-by-hour in this case) for leveling.
  • Edit Task Usage
    View/Task Usage and change the minor time scale to hours, then manually edit the working hours so there is no overlap.
  • Adjust units
    Set the Resource Units of the two tasks to be 50% on both so when there is an overlap the maximum is 100%. Make this edit by selecting Window/Split and change the units in the Task Entry form that appears in the lower window. The man-hours of work can be edited here as well to more realistic amounts if appropriate.

Resource Leveling –approaches
There are really three approaches to creating a project plan that has resources appropriately allocated. Your choice strongly affects the effort you will put into the plan up front, and how much effort is needed as you manage the plan. MS-Project can indeed be frustrating or rewarding, depending on how it is used and how well you understand its behavior.



How to approach leveling
Prior to leveling (or more likely, to tune the way Level Resources works), you may want to set the task priorities. Just add a column for “Priority”. Priority is an indication of a task's importance and availability for leveling. Tasks with the lowest priority are delayed, and those with the highest priority tend to be scheduled earlier. The priority value that you enter is a subjective value between 1 and 1000, which enables you to specify the amount of control you have over the leveling process. For example, if you don't want Project to level a particular task, set its priority level to 1000. By default, priority values are set at 500.
In most cases, consider leveling overallocated resources only after you have entered all information about task scheduling and resource. In some cases, you might want to level resources, examine the outcome, and then adjust other task and assignment information.
When entering schedule information for your tasks, keep the following in mind to make sure MS-Project schedules your project accurately, and to help prevent unnecessary resource overallocations. Here are some tips:
  • Task Dependencies
    Use task dependencies to reflect a necessary sequence of events. Don’t overuse them, as each additional dependency can result in resource underutilization or completion date slippage.
  • Constraints
    Avoid inflexible constraints, except where necessary. Inflexible constraints are those tying a task to a date. The inflexible constraints are Must Finish On and Must Start On date constraints. You can specify that a task must start on or finish no later than a particular date. Note such constraints limit the adjustments that Project can make when determining which tasks to adjust when leveling resources.
  • Priorities
    These values act as “hints” to the resource leveling engine that affect the schedule sequencing of tasks. Tasks with the lowest priority are delayed or split first. Use a task priority of 1000 (meaning do not level this task) only when a task absolutely cannot be delayed. 500 is the default value.

Some recommendations on settings when leveling resources:

  • Never use “Automatic”
    Manual is best, so you don’t get blindsided by automatic changes. Plus automatic changes can be slow in a large plan.
  • Start with “Day by Day” for overallocation
    If you need to eliminate all overallocations, go with minute-by-minute.
  • Enable “Clear leveling values before leveling”
    Otherwise you will accumulate delays and elongate the plan
  • Level the entire project or from the current day forward
    You may wish to delete leveling for a range of tasks, which you can easily do afterwards.
  • Use “Priority, Standard” leveling order
    This allows you to use “Priority” as a hint to task sequencing. See the section below for more information on what is going on in the engine for this setting.
  • Enable “Leveling Can Adjust Individual Assignments On A Task” only if you find that resources are insufficiently allocated
    This setting allows multiple resources assigned to a task to have their day-to-day assignments tuned to optimize completion of the task. It will have no effect unless multiple resources are assigned to one task.
  • Disable “level only within available slack” if you want some meaningful resource leveling
    Note this enables the engine to adjust your completion date.
  • Disable “Leveling Can Create Splits In Remaining Work” unless you are ok with work getting stopped and started at random to create full utilization
    Bear in mind there are stresses and task switching overhead for human beings that get yanked between tasks seemingly at random.



What “Leveling Order” does
There are three values for leveling order, which drive how the underlying leveling engine performs. Here’s what’s going on for each of these values:


My recommendation? Don't rely on "ID Only" alone: I make a minimal number of adjustments to "Priority" values, then use "Priority, Standard" so those priority settings are taken into account.
When MS-Project levels resources, it only delays tasks (or splits tasks if you have allowed that). It does not:
  • Reassign tasks
    It is up to the PM to make the resource assignments.
  • Optimize a resource's allocation
    Because leveling does not move tasks earlier or reassign units, a resource flagged as overallocated might become underallocated as a result of leveling. If you have multiple resources assigned to each task at varying units, you are likely to have underallocated resources, as MS-Project will level to ensure there is no overallocation.
  • Reassign units
    For example, if I am assigned to work on two tasks that are both scheduled at the same time, leveling won't change my units so that I work on both tasks at 50 percent.
When you're ready to have Project level resources, on the Tools menu, click Level Resources. To accept all the defaults, click Level Now.

After Project finishes leveling an overallocated resource, certain tasks assigned to that resource are split or delayed. The split or delayed tasks are then scheduled for when the resource has time to work on them. You can see the results of leveling in the Leveling Gantt view, which graphically shows preleveled values compared with postleveled values, including newly added task delays and splits.
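
For a feel of what the engine is doing under the hood, here is a toy sketch (my own simplification, not MS-Project's actual algorithm) of leveling by delay for a single resource: overlapping tasks are pushed later, never earlier, until the resource is free.

```python
# Toy "level by delaying" pass for one resource: conceptually what hour-by-hour
# leveling does to the three overlapping one-hour tasks in the earlier example.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    start: float     # hour of day the task would start, e.g. 8.0 = 8am
    duration: float  # hours of work

def level(tasks):
    """Delay overlapping tasks (earliest first) so the resource never works two at once."""
    next_free = 0.0
    for t in sorted(tasks, key=lambda t: t.start):
        t.start = max(t.start, next_free)  # delay only; never move a task earlier
        next_free = t.start + t.duration
    return tasks

# Three one-hour tasks all scheduled at 8am: unleveled, the resource looks overallocated.
plan = [Task("A", 8.0, 1.0), Task("B", 8.0, 1.0), Task("C", 8.0, 1.0)]
for t in level(plan):
    print(f"{t.name}: {t.start:.1f} -> {t.start + t.duration:.1f}")
# A: 8.0 -> 9.0, B: 9.0 -> 10.0, C: 10.0 -> 11.0 (three hours of work, no overlap)
```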

The effect of delay on leveling and scheduling
Leveling delay lets a project manager precisely manage the start and end of every task without adding dependencies and without fixing the start date. Leveling delay is a hidden field that exists at two places: the task level and the resource-assignment level:
  • Task Level
    Task level is the easiest to apply, but the hardest to manage. Simply add the “Leveling Delay” column to the Gantt chart view, or use the “Leveling Gantt” view. Enter the number of hours or days delay, and the entire task shifts out in time. Note the delay must be in elapsed days or hours. Elapsed days do not honor holidays and non-work time. A project manager must often adjust these delays manually as task start and end dates shift.
  • Resource Level
    Resource-assignment leveling delay can be entered for each resource assigned to a task. It can be entered in work-hours or work-days, and it provides the flexibility to level out any imaginable work. If two people are assigned to a task, but only one is overallocated by 4 hours, delay the overallocated resource by 4 hours; the other resource's start date is unaffected. To delay the entire task, though, every resource assigned to the task must be delayed. To make maintenance simpler, allocate one and only one resource per task.

Leveling delay is measured from the dependencies of the task. For instance, if task #4 and task #5 are both dependent upon task #3, ending on Monday, they will both begin on Tuesday with no delay. Putting a two-day delay on #5 will make task #4 start on Tuesday and #5 start on Thursday. If the end-date of #3 moves to Tuesday, they will both roll forward to Wednesday and Friday start-dates.

By combining units (% allocation) and leveling delays, a project manager can accurately represent complex work. People can work multiple tasks simultaneously, participate in multiple chains of dependent tasks, and still work all available work hours, and not one hour more.
"Level Resources" adds a "Leveling Delay" value that is visible as a column (field) in MS-Project. It is shown in elapsed days (edays), which represent calendar (elapsed) time, not work time. You can use the shorthand "ed" (such as 5ed) to represent 5 elapsed days, and you can edit these fields, either to manually remove or to adjust the output of leveling resources.

Some observations looking at the Leveling Gantt view, looking at “Leveling Delay”:
1. Task ID 2 and 3 were slipped within the same day to avoid overallocation
2. Task ID 7 has a predecessor, Task ID 4, so the latter has no green trailing line
3. Since priority was not set, the tasks are generally in ID order

Task level delays described above move an entire task with all assignments. A more fine-grained adjustment is to insert delays at the resource (assignment) level within a task.

In summary, Level Resources requires some understanding to use effectively, but it can save you significant effort in tuning a plan to effectively utilize resources to achieve your project objectives.

Sunday, January 25, 2009

Data Configuration Management

Over a period of decades IT management has developed a mature discipline around Software Configuration Management (SCM). The growing challenge in IT is that data has assumed an increasingly common role of controlling and affecting software logic, yet often sidesteps the rigorous change management controls that are in place for software.

What is Software Configuration Management?
Simply put, this is the process of tracking and controlling changes in software. Configuration management practices include revision control and the establishment of baselines. From a management perspective, this allows control over what changes are made, what the differences between software versions are, and how to roll them back. A robust set of tools exists for managing changes, versioning software components, comparing source code, and automating builds.


The power of data
Users have always been able to change data via applications. However there are certain categories of data I would argue need to be managed with the controls already devised for software:

  • Business Logic
    Configuring systems to take different courses of action based on data settings. What was once hard-coded can be exposed to end-user configuration.
  • Rules based systems
    These are systems explicitly designed to formalize rules, generally eschewing code for data. It is an easy step to enable end users to edit the rules.
  • Templates
    These affect display, formats, transformations, and any number of system inputs/outputs. These include XSLT.
  • Dynamic data
    It is considered extremely chaotic to have code actually change code in real time. This rarely occurs on purpose outside academia. Code is typically considered static and unchanging. Configuration Data, however, is easily changed in real time.


The heightened challenge of data
Data typically is not retained with the same rigor as source code:

  • Data often exists as a point in time
    When a data element changes, the history is typically not retained. Databases commonly do not allow for native rollbacks in the way source code management systems allow.
  • Data does not allow for comments
    Source code (based on the underlying language) allows for in-line comments. Data almost by definition does not support comments.
  • Data versioning today requires point-solutions
    If you want a history of data changes, it typically requires the addition of a dimension for each element. This is custom coding, and in the pressures of today’s development and the need for performance, the data cannot easily be versioned.
  • Data is undated
    Examine a database; can you tell when a given field within one record was changed? And by whom?
  • Data does not execute serially
    What makes source code relatively easy to walk through is the sequential and serial nature of executing instructions. This stems from the Von Neumann architecture, described by the visionary computer scientist John von Neumann, which distinguished data from instructions and defined a control unit (today, a CPU) for serially executing code. This leads to the next point.
  • Data has no meaning without context
    The interpretation of data is left to the reader. The data by itself can be asserted to be meaningless without the context of its definition, purpose, restrictions, or values. Source Code by definition can be interpreted through the lens of the compiler or interpreter for which it is explicitly designed.

There are several gradual trends that have led to data-driven systems taking on aspects of source code:

  1. Cloud Computing / SAAS
    Software As A Service may just mean hosted application software, but when there are economies of scale via multi-tenancy, you have multiple instances of the application serving many customers. Each customer may have the software configured slightly differently, and herein lies the rub: customization is often done via data configuration settings, and those settings need to be treated with the same care as source code.
  2. Need for Speed
    Business desire to move quickly conflicts with configuration, change and release management which imposes control disciplines. Business users gradually realize they can bypass the iron grip of these processes by changing configuration information.
  3. Template standardization
    As software is commoditized, templates become standardized; both within applications and within the industry. This provides a rich and powerful capability for transforming software behavior without programming or touching source code.


The crux of the issue
There has long been a focus on Intellectual Property (IP), data protection, ensuring against data loss, and data availability, but what about managing changes to data that can impact the organization through how systems behave? Are changes tested and tracked, approved, and deployed on a schedule, with the ability to undo? Is there a systemic way to associate system behavioral changes with control data changes?

Network and security specialists have long grappled with this issue, as firewall rules and network routing are commonly data with the capacity for significant impact on the organization. Access control related to financial data has taken on stricter requirements as a result of SOX. Configuration Data, however, remains the 900-lb elephant in the room that many prefer to ignore.

Best practices
1. Limit users that can change “Configuration Data”
2. Ensure there is a log of Configuration Data changes
3. Ban Configuration Data that is changed dynamically by an application.
4. Demand the ability to roll back from any change, and retain the control of roll-back within IT
5. Segregate roles of those that change data, and those that approve the changes
6. Have data changes made first in a test environment
7. Consider using QA to vet changes
8. Define Configuration Data clearly, and put Configuration Data changes through existing change control processes
9. Establish policies that reflect the above, publish policies and track that policies are accepted annually by all affected staff.

Thoughts for the future
A checksum can be used to verify that configuration data has not changed. Systems can be designed to report changes to configuration data, or even to refuse to start, if it is preferable not to activate a system at all rather than run it with suspect data.
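
As a sketch of that idea (the file name, JSON format and digest are assumptions for illustration, not a prescription), a checksum of the configuration's canonical form can be recorded when a change is approved and verified at startup:

```python
# Sketch: fingerprint configuration data and verify it before starting the system.
import hashlib
import json

def config_fingerprint(path):
    """Return a SHA-256 hex digest of the configuration file's canonical JSON form."""
    with open(path, "r", encoding="utf-8") as f:
        data = json.load(f)
    canonical = json.dumps(data, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Recorded through change control when the configuration change was approved (placeholder).
APPROVED_DIGEST = "<digest recorded at approval time>"

def verify_or_refuse(path):
    """Report drift, or refuse to start, if the running configuration was not approved."""
    actual = config_fingerprint(path)
    if actual != APPROVED_DIGEST:
        # Depending on policy: log and alert, or refuse to activate the system.
        raise SystemExit(f"Configuration drift detected: {actual}")

# verify_or_refuse("app_config.json")
```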

If Configuration Data is effectively indistinguishable from code in its effect, then why should it be subject to less stringent controls than code? In summary, Configuration Data needs to be managed with the same level of control as source code. To be effective, one needs first to define crisply what Configuration Data is, so as not to swamp an organization with unnecessary controls.

Sunday, January 11, 2009

Organizational culture: The Process Dimension

To be effective within an organization, the IT manager needs to understand the organization, deeply. Experience (and grey hairs) give an IT manager the tools for understanding and synthesizing an organization’s culture both to function effectively within it, and to chart out and implement any changes that are both needed and feasible.


What is “Organizational Culture”
Organizational culture is the personality of the organization, and as with people, personality is deeply ingrained and not at all easily changed. Culture is the aggregation of behaviors, values, beliefs, norms and tangible signs of organization staff behavior. Certainly the industry plays a part, as do staff and location diversity and the organization's stories. A simple look at the "About Us" page on a company's website can provide telling insights. While there are many ways to examine the culture of an organization, for the moment let's consider cultural compatibility with a process orientation.


The reluctant organization
There exists a world-view that process is simply bureaucratic overhead that gets in the way of actual work. Amazingly, in this day and age there exist organizations and pockets of IT that have inculcated this belief system. Symptoms include endless fire-drills and wringing the last bits of overtime out of beleaguered staffers under increasing stress and pressure. Introduction of process from the bottom up is destined to fail. Such organizations find they cannot scale up, and suffer high turnover and inconsistent quality. 3,000 years ago King Solomon, writing under a pen-name, said "What is truly crooked cannot be straightened", which is as applicable as ever. As long as this weltanschauung (world-view) radiates from the top of the organization, the organization will not change. As the wry Russian expression states, "The fish rots from the head."


Auto-industry
Let's briefly examine the evolution of the post-war automobile industry. The concepts of creating a science out of production, by instituting systematic process with solid statistical foundations, were being charted by two great men, W. Edwards Deming and Joseph Juran. U.S. companies rejected the concepts, but Japanese manufacturers adopted the principles, leading Japan to reverse its reputation for shoddy goods and ultimately overshadow the U.S. car industry by adapting and implementing TQM (Total Quality Management). This exemplifies the power of rigorous process, as well as the feasibility of organizational cultural change when the will to change and top-level support exist.


How to read an Organization’s culture
Before taking on a role within a company where you will drive process (such as project management, program management, audit, or the PMO), first look for the clues of acceptance. Here's what to look for and some probing questions to ask:


The Hercules Mythology
When an organization collects and nurtures irreplaceable people, it can be indicative of poor cross-training, lack of delegation, and possibly staff who become unmanageable as they develop a sense of self-importance that can be caustic to staff morale. One symptom of this is mythology around key individuals, typically involving herculean efforts. Often people eager to achieve this kind of stardom allow problems to fester until a crisis develops, so they can pull an "all-nighter" to "save the day". Staff can become addicted to the adrenaline rush, leading to avoidable crises.


Organizational hierarchy
A strict hierarchy, with command/control, and no cross-silo collaboration can indicate duplication within each department, and lack of communication that creates avoidable waste. When taken to an extreme, this organization structure often is associated with the philosophy that “management knows best” and often is exhibited with overworked managers, and staffers with insufficient information to be effective.


Beliefs around past failures
Why has the organization failed in the past? If the answers are around having the wrong people, then the belief system is that hiring the right people will solve the problem. Such organizations may have to overpay to hire overqualified individuals, who through brute force achieve goals, yet do so in an unpredictable and irreproducible fashion.

Scapegoats
High turnover is another symptom that a core organizational belief is that the organization would achieve greatness if only they had the right people. Consistently high turnover over time raises the question of whether the problem is the people, or the organization. A similar symptom is consistently high turnover for one job or role. This can indicate that a role is set up for failure and cannot possibly succeed. One wonders if this adage can be applied to organizations as well as people “one definition of insanity is doing the same activity repeatedly, yet expecting a different outcome”.

Overtime
Is overtime common? Overtime may be used to compensate for poor estimation, overoptimistic commitments, poor and rapidly changing prioritization. Overtime is common in IT, but one needs to be aware that one can work 10%, 20% even 30% overtime, but cannot increase productivity by 200% or more. Plus overtime is self-defeating, as staff productivity will degrade due to impacted morale and sleep deprivation.

Lessons learned
Does the organization engage in any kind of post-mortem or lessons learned exercise after a project? No matter how informal or inconsistently applied, such a practice within an organization gives hope that in its desire to learn, it acknowledges there are indeed lessons to learn, and ways to work better, opening the possibility of gradually introducing a process orientation.

Summary
In summary, a process orientation in a corporate culture is not just a recipe for quality, consistency and scalability, but a foundation for organizational and personal success, and it may well be a core competitive competency and a barrier to competition and market entry for less enlightened organizations.