Wednesday, March 23, 2011

It is a mistake to confuse methodology with objectivity

People use a variety of evaluative processes in business to ensure objectivity. But the dangerous fact is that these evaluative processes, these methodologies, even the most rigorous, are woefully insufficient to guarantee objectivity.

Why? There are three basic dangers in relying on method to ensure objectivity.

1. The normative dangers

The overarching danger attached to evaluative models is the belief that we can use methodology as a legitimating device, even when the methodology itself is flawed. Worse, if an injustice results from the application of a methodology, how can it even be called an injustice if the "test" for justice is the methodology itself?

2. The epistemic dangers

I use staffing as a paradigmatic example because it sits at the intersection of the human and the operational domains of management.

To ensure transparency and effectiveness, it is assumed that a method which permits the definition, measurement and testing of people's "competencies" will do the job. First, who guarantees that the definition of competencies is appropriate? What sanctions the applicable criteria? It depends on the motive - are the criteria sanctioned in terms of the business needs that the hire will meet, or the efficiencies that the hiring process will meet? (If you have to think more than 5 seconds on that one, you're in trouble.)

Second, competency is not a feature of potential hires like the number of doors on a car. People are living, learning beings with potential, and we can't screen for traits we haven't identified but might have found very valuable.

Competencies mean little without relation to the motivation to develop them further and the opportunities to use them.

We have a collective Western habit of freeze-framing states and confusing the ability to label the frame with the ability to recognize the meaningful stuff in phenomena. Nothing living stands still, including the level of the abilities of your potential staff.
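
To make the freeze-framing problem concrete, here is a minimal sketch in Python of the kind of checklist screen I have in mind; all the names and thresholds are invented for illustration, not drawn from any real system. A candidate is reduced to a fixed vector of labelled competencies, and anything not already on the list cannot register at all:

# A hypothetical checklist screen. Competencies are treated as fixed,
# exhaustively listable attributes, like the number of doors on a car.
# All names and thresholds below are invented for illustration.
REQUIRED = {"sql": 3, "reporting": 2, "stakeholder_mgmt": 2}

def screen(candidate: dict) -> bool:
    """Pass only candidates who meet every pre-set threshold.

    Note what this cannot do: see a competency that was never defined,
    weigh motivation or potential to grow, or revise REQUIRED once
    screening has begun. The frame is frozen by construction.
    """
    return all(candidate.get(skill, 0) >= level
               for skill, level in REQUIRED.items())

# A candidate with an unlisted but valuable trait scores no better
# than one without it - the method has no slot for the unanticipated.
print(screen({"sql": 3, "reporting": 2, "stakeholder_mgmt": 1,
              "plain_language_writing": 5}))   # False

The flaw isn't in any one threshold; it is that the frame itself is fixed before screening begins, so nothing learned during the process can change what counts.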

3. The ontological dangers

Empiricism is at pains to distinguish itself from teleology - the identification of ends. The underlying belief at stake is the idea that a grand purpose or design to everything is an anthropomorphic projection. This may or may not have clear import in certain areas of scientific research, but to carry that principle of wariness over into the types of planning whose purpose is to improve quality of life is ludicrous.

For example, in staffing (the clearest examples come from that domain, as I explained earlier), operationally defining merit criteria in terms of a screening methodology puts the methodological cart before the ontological horse. A competency is not an exhaustively definable capacity but a set of potentials that evolves over time. Efforts should be made to ensure fairness and objectivity, and thereby to ensure that the right person is selected for the job (i.e., the "telos" or end is that the essential criteria are connected to the needs of the job, and the person's overall fitness for the role is assessed according to those ends). This connection is lost when the specific criteria become disconnected and itemized, and especially when they become too narrowly defined in the service of a different objective - managing the "empirical" screening process efficiently. Divorced from its ends, of course, efficiency has no meaning. This is neither fair nor effective.

I may have said this before, but it is clear that no matter how much “empirical evidence” we have, in the end it is our capacity for meaningful valuation that allows us to identify worthwhile goals, and our capacity for reasoning that allows us to collect and assess the relevant evidence in support of them. These two capacities are not separate but deeply connected in us, and they contribute equally to the quality of our lives. My examples tend to be drawn from staffing because they are the most straightforward, but the underlying principles apply to many areas of public administration.

Going against the grain of the contemporary Zeitgeist: to ensure objectivity, and by extension fairness, in people or program evaluation, we should use our capacities for meaningful valuation and reasoning to the best of our abilities.

Wednesday, March 9, 2011

Change Discourse: An Aside on Planning and Evaluation


In large organizations, under the accepted practices of planning and evaluation, we are hampered by our inability to acknowledge unanticipated benefits, i.e., the creativity of what is proposed. We seem unable to value options we haven't anticipated in advance, or even to accommodate valuations of unanticipated options. What would count as legitimation in these instances? The unanticipated benefit cannot be considered relevant under pre-established criteria.
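
The same problem, sketched in code. The rubric, its criteria and its weights below are hypothetical, invented purely for illustration: because only pre-registered criteria can carry weight, an unanticipated benefit is arithmetically indistinguishable from no benefit at all.

# Hypothetical rubric with invented criteria and weights. Only
# pre-registered keys can carry weight in the score.
RUBRIC = {"cost_savings": 0.5, "risk_reduction": 0.3, "alignment": 0.2}

def evaluate(proposal: dict) -> float:
    """Weighted score over pre-established criteria only.

    Anything in the proposal that is not already in the rubric is
    silently excluded - there is no legitimation path by which an
    unanticipated benefit could enter the score.
    """
    return sum(weight * proposal.get(criterion, 0)
               for criterion, weight in RUBRIC.items())

# A genuinely novel benefit ("enables_new_service") carries weight zero:
print(evaluate({"cost_savings": 4, "risk_reduction": 3,
                "enables_new_service": 5}))   # 2.9, same as without it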

Why do we deprive ourselves of anything more than our presupposed potential? The standard (ontological and meta-ethical) notions of objectivity and fairness in evaluation are in need of a major rethink, especially in a complex, rapidly changing world where future potentiality becomes at least as important as present actualities. Adaptability and responsiveness are more important than adhering to artificial and outdated notions of objectivity, which for at least two millennia have been more about projecting our conceptual filing cabinets onto events than about encouraging openness to new and novel ideas and approaches. To add irony to insult, in today's variant, which mimics the successful model of reductionist science, it is assumed that the ability to assign a number to some evaluative criterion makes that criterion objective.

The difference between living and mechanical linear/reductionist notions of objectivity needs to be emphasized. In the linear/reductionist model, the relevant determinants of an event are isolated and identified in a rigorous way, such that they can be arranged and used in planning experiments and building machines. As successful as this method has been, however, according to scientists such as Eugene Wigner it legitimately applies to only a small set of relatively easily manipulable things (which is why basic physics is about necessary laws, and anything more complex is statistical at best at the lowest level of granularity). The attempt to apply it to a large, open-ended organization occasions the kinds of issues described by Burt Perrin (Paragraph 103).

Stuart Kauffman shows that a living system is one where the boundary conditions are intrinsic to the phenomena under investigation, rather than "placed by hand" as they are in a more mechanistic model. He further argues (cf. pp. 131-143) that there is "no way to pick out the relevant collective variables that will play a causal role in the further evolution" of living systems (pp. 140-1). Nevertheless, we put our methodological carts before our ontological horses, and worse, we justify our aims based on our methods rather than the reverse. This is not only reductionist. It reduces us.


Our model of measurement needs to change from a mechanical one to one that respects the fact that we work in living systems. We need something less pedantic than a pre-established list of specific criteria for evaluation, and we need to learn how to acknowledge context - the context being our actual overall aims in relation to what is being evaluated, both now and, as they change with learning, over time. We need to look at a higher level of granularity if we're serious about innovation, or about getting beyond a nihilistic, means-focused outlook (a hamster wheel). This will require a sea change at every juncture. Responsiveness and resiliency will be key.
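
For contrast, here is a hedged sketch of what a "living" alternative might look like - again with invented names and toy numbers, a thought experiment rather than a prescription. The criteria are re-derived from our current aims at every review cycle, so that learning can change both the weights and the list itself:

# Hypothetical sketch of the alternative: criteria are re-derived from
# our current aims at each review cycle instead of being frozen up
# front, so learning can change both the weights and the list itself.
def criteria_from_aims(aims: dict) -> dict:
    """Turn the aims, as they stand now, into normalized weights."""
    total = sum(aims.values())
    return {goal: weight / total for goal, weight in aims.items()}

aims = {"service_quality": 2, "staff_learning": 1}
for cycle in range(2):
    rubric = criteria_from_aims(aims)
    print(cycle, rubric)
    # After each review an unanticipated aim can surface and be admitted,
    # rather than being ruled irrelevant by a pre-established list.
    aims["accessibility"] = aims.get("accessibility", 0) + 1

The point of the sketch is not the arithmetic but the direction of dependence: the rubric answers to the aims as they evolve, rather than the aims being confined to whatever the rubric happened to anticipate.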