Metachoice for Benchmarking: a case study
Laise, D.; Iazzolino, Gianpaolo
2015-01-01
Abstract
Purpose
In a previous paper we emphasized the advantages of multicriteria methodologies for evaluating business performance. In this paper we highlight the metachoice problem that always arises in multicriteria benchmark analysis, which can be summarized as: “how to choose an algorithm to choose?”

Approach
To perform a benchmark analysis, a set of criteria must be chosen. In the Balanced Scorecard approach, for example, Key Performance Indicators (KPIs) are grouped into four perspectives: Financial, Customer, Internal Processes, and Learning & Growth. In this paper we focus on multicriteria benchmark analysis applied to KPIs of the Financial perspective. We consider a set of criteria used in financial statement analysis, based on the balance sheet, income statement and cash flow statement. A case study is described.

Findings
The main findings of the paper are: (i) when the evaluation of a firm is based on several genuine criteria, a metachoice problem arises: multicriteria ranking algorithms cannot themselves be selected using a multicriteria algorithm; (ii) the choice of an algorithm ultimately depends on the subjective preference of the policy maker; (iii) our metachoice solution to the benchmarking problem is in accordance with Simon’s satisficing approach, describing a non-maximizing performance measurement methodology.

Practical implications
The paper has practical implications in all cases in which a ranking has to be assigned to a group of firms based on financial performance. More generally, the problem is relevant whenever a ranking has to be produced over a set of projects, strategies, organizational units, and so on.

Originality/value
Adopting a set of criteria is certainly an advantage, as it avoids myopic single-criterion evaluation. However, it also creates methodological problems. We demonstrate the “relativity” (subjectivity) of the results of the evaluation process when there are many evaluation criteria, as in a benchmark context. This is a metachoice problem that cannot be solved by using another multicriteria algorithm.
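Finding (i) can be illustrated with a minimal sketch. The firms, criteria and scores below are invented for illustration (they are not the paper's case-study data): two standard multicriteria aggregation rules, a compensatory weighted sum and a non-compensatory lexicographic ordering, applied to the same KPI scores, produce opposite rankings — so selecting the “right” rule is itself a choice problem.

```python
# Hypothetical data: normalized scores on three invented financial
# criteria, e.g. [ROE, liquidity ratio, cash-flow margin]; higher is better.
scores = {
    "Firm A": (0.9, 0.1, 0.1),
    "Firm B": (0.8, 0.7, 0.7),
    "Firm C": (0.5, 0.9, 0.9),
}

def weighted_sum(vals, weights=(1/3, 1/3, 1/3)):
    # Compensatory rule: a weakness on one criterion can be offset
    # by strength on another.
    return sum(w * v for w, v in zip(weights, vals))

def lexicographic_key(vals):
    # Non-compensatory rule: criteria are compared in strict priority
    # order (tuple comparison in Python is lexicographic).
    return vals

rank_sum = sorted(scores, key=lambda f: weighted_sum(scores[f]), reverse=True)
rank_lex = sorted(scores, key=lambda f: lexicographic_key(scores[f]), reverse=True)

print("Weighted-sum ranking: ", rank_sum)  # ['Firm C', 'Firm B', 'Firm A']
print("Lexicographic ranking:", rank_lex)  # ['Firm A', 'Firm B', 'Firm C']
```

Both orderings are defensible, and no third multicriteria algorithm can adjudicate between them without raising the same question one level up — which is the metachoice problem the paper describes.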