However, there are common themes that recur in many labs, including:
- Significant Short-term Volatility in Lab Workloads (by far the biggest productivity improvement opportunity in most labs)
- Unbalanced and Volatile Analyst Workloads
- Queues and high volumes of WIP
- Significant effort in controlling, tracking, and prioritising samples
- Ineffective 'fast track' systems
- Lack of defined sequences, standard run sizes or standard work
- Poor or no performance management / short interval control
- Lots of Non-Value-Adding Activity (NVA)
- Long lead times and/or low productivity
- Investment in software ('LIMS will save us')
In most labs the incoming workload is highly volatile over short intervals, with significant peaks and troughs. More often than not, this volatility is imported directly into the testing process, causing low productivity (during troughs) and poor lead-time performance (during peaks). Very often the capacity of the lab is not well understood and there is no mechanism to level the workload. Many labs operate in a repeating cycle of 'run late' and 'catch up', essentially lurching from crisis to crisis.
Levelling a volatile workload is perhaps the single most valuable thing that can be done when leaning a lab – it generates significant productivity improvements and changes how a lab operates and performs. It also significantly reduces ‘fire fighting’ and related stress. Volatile workloads can often be levelled via 'levelling queues' followed by 'Rhythm Wheels' or 'Test Trains'.
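The effect of a levelling queue can be sketched in a few lines. The simulation below is a minimal illustration, not a BSM tool: the arrival figures are invented, and the rule is simply that the lab releases work from the queue at a fixed rate equal to average demand, so the testing process never sees the raw peaks.

```python
import statistics

# Hypothetical daily sample arrivals over two weeks -- illustrative numbers only.
arrivals = [4, 19, 7, 31, 2, 25, 10, 28, 5, 16, 30, 3, 22, 8]

# Release work at the average demand rate; the queue absorbs the variation.
level_rate = round(statistics.mean(arrivals))

queue = 0
released = []
for a in arrivals:
    queue += a                    # new samples join the levelling queue
    out = min(queue, level_rate)  # the lab only ever takes the levelled amount
    queue -= out
    released.append(out)

print("level rate per day:", level_rate)
print("released per day:", released)
print("peak seen by lab:", max(released), "vs raw arrival peak:", max(arrivals))
```

With these figures the lab staffs to 15 samples a day instead of a peak of 31; the trade-off is that the queue, not the analysts, buffers the variation.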
Incoming workload volatility is often compounded by the method of allocation of analyst resources.
- Analysts dedicated by test or sample type: In some labs analysts are dedicated to particular tests or sample types. If the volume of samples for their particular test or sample type is volatile, then their daily or weekly workload will also be volatile – a clear productivity loss. In addition, individual analysts may be busy while others are not – another clear productivity loss.
- 'Weekly Bucket' Scheduling: In many other labs, supervisors or group leaders plan and schedule the following week's work for each individual analyst. This can reduce the imbalance between analysts. However, because scheduling is usually done on the basis of 'available work through available people', the workload for analysts is volatile from week to week – again a clear productivity loss.
Also, because 'Weekly Bucket' Scheduling ties up resources for the week ahead, new samples arriving during the week must wait until the following week to be tested. Samples that arrive earlier in the week wait longer than those that arrive later, which results in variable lead times.
Labs are often too focused on individual test run efficiency. We often find queues in front of tests in which individual samples wait until enough similar samples arrive to constitute an 'efficient' test run. This approach causes long and variable lead times and, contrary to popular belief, often does not result in higher overall lab productivity. Samples should be tested at the levelled demand rate in as few test runs as possible.
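The lead-time cost of waiting for an 'efficient' run can be made concrete. In the simple model below (an assumption for illustration: one sample arrives per day and a run starts only when the batch is full), a sample's wait depends on its position in the forming batch, and the average wait grows linearly with the run size:

```python
def waits_for_run_size(run_size):
    """Days each sample in a forming batch waits before the run starts,
    assuming one arrival per day and no run until the batch is full."""
    return [run_size - 1 - i for i in range(run_size)]

for rs in (2, 5, 10):
    w = waits_for_run_size(rs)
    print(f"run size {rs}: waits {w}, average {sum(w) / rs:.1f} days")
```

Doubling the run size roughly doubles the average batching wait, which is why larger 'efficient' runs often worsen lead time without improving overall throughput.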
The long lead times and high volumes of partially tested samples (WIP) common in many labs demand significant effort to manage and control. It is not unusual for supervisors and team leads to spend a significant portion of their time sorting and organising workloads, locating individual samples and tracking the progress of work through the lab. Developing 'flow' of samples via defined sequences of testing will significantly reduce the volume of WIP and the effort required to manage it.
'Fast Track' systems are often developed in an effort to deal with urgent samples, but these rarely work. In most cases, the proportion of samples designated as priority grows so large that 'Fast Tracking' quickly becomes unworkable. A much better approach is to re-engineer the process to improve the velocity of every batch – it can be done.
In many labs samples are simply tested in order of arrival to the lab. This allows the daily workload and mix of samples to vary. In addition, individual analysts are often allowed to carry out the required tests in any order they like and therefore to change the daily combination of tasks. The number of samples in individual test runs is also allowed to vary, and this lack of 'defined sequences', 'standard run sizes' and 'standard work' results in variable lead-time and productivity performance.
Some people are good time and task managers and will naturally combine tasks in an efficient manner, but many are not. A standard work approach will discover the best method and ensure that it is followed by all. Standard work is a key lean principle which aims to combine tasks in order to use analysts' time well. It is enabled by defined sequences of operation and levelled workloads. To make these sequences simple to understand and operate, BSM often incorporates them into heijunka devices such as 'Rhythm Wheels' and 'Test Trains'.
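A rhythm wheel can be sketched as a fixed, repeating sequence of test slots sized to the levelled demand mix. The wheel below is a minimal illustration with invented test names and slot counts, not an actual BSM design; the point is only that each day's work is read off the wheel rather than re-planned ad hoc:

```python
from itertools import cycle, islice

# Hypothetical repeating sequence: slot counts reflect an assumed demand mix
# (e.g. HPLC is half the workload, so it holds half the slots).
wheel = ["HPLC", "HPLC", "Dissolution", "Karl Fischer", "HPLC", "Microbiology"]

def next_slots(start, n):
    """Read the next n slots off the wheel, wrapping around indefinitely."""
    return list(islice(cycle(wheel), start, start + n))

print(next_slots(0, 8))  # the sequence simply repeats past the end of the wheel
```

Because the sequence is fixed, everyone can see what is running now, what runs next, and where a given sample will slot in, with no weekly re-scheduling effort.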
In many laboratories, only lead-time and the investigations rate are measured. Lab productivity is often ignored (because it is perceived as difficult to understand and measure). Overall lab performance and performance trends are often not communicated well to the individual analysts.
'Short Interval control' is also weak in many labs. Too often managers, supervisors or group leaders only find out that there are problems with individual samples or tests when the results are past due. Operational performance (versus a pre-defined sequence of testing) should be reviewed at least daily as part of a short team ‘huddle’ held in front of a performance white board.
Most labs have significant amounts of non-value-adding activity, which should be addressed in a lean lab project. Some very common forms of NVA include:
- Unnecessary or Excessive Testing
- e.g. Information only tests not required by the M.A.
- Excessive Planning and Scheduling Effort
- Excessive or unnecessary Test Documentation
- Rewriting the method / duplicating or transcribing log book or LIMS data.
- Excessive Documentation Review and Approvals effort
- Too many and too slow.
- Poor Documentation 'Right First Time' (RFT) and excessive error correction effort
- Excessive Investigation effort (via slow and unwieldy investigations processes)
The workload volatility and lack of standard work common in most labs result in long and variable lead times. If you simply throw resources at the problem, lead times can be maintained but productivity will be poor. It is possible, however, to level workloads, introduce flow and eliminate waste; the capacity this releases can then be used to improve productivity and/or lead time.
Software can be a good way to reduce non-value adding effort. However, it will not normally in itself significantly improve lead-times or lab productivity. The underlying process should be re-engineered before parts of it are automated via LIMS or other software.