Potentially all pairwise rankings of all possible alternatives (PAPRIKA) is a method for multi-criteria decision making (MCDM) or conjoint analysis based on decision-makers’ preferences as expressed using pairwise rankings of alternatives.^{[1]}^{[2]}
The PAPRIKA method – implemented via decision-making software known as 1000Minds – is used to calculate point values (or ‘weights’) on the criteria or attributes for decision problems involving ranking, prioritising or choosing between alternatives. Point values represent the relative importance of the criteria or attributes to decision-makers.
As well as representing decision-makers’ preferences, the point values are used to rank alternatives – enabling decision-makers to prioritise or choose between them (perhaps subject to a budget constraint). Examples of applications of the PAPRIKA method appear in the next section.
Applications
Applications of the PAPRIKA method in the area of health care decision-making include:
Applications in other areas include:
Additive multiattribute value models (or ‘points systems’)
The PAPRIKA method specifically applies to additive multi-attribute value models with performance categories^{[30]} – also known as ‘points’, ‘scoring’, ‘point-count’ or ‘linear’ systems or models.
As the name implies, additive multi-attribute value models with performance categories – hereinafter referred to simply as ‘value models’ – consist of multiple criteria (or ‘attributes’), with two or more performance categories (or ‘levels’) within each criterion, that are combined additively. Each category is worth a certain number of points intended to reflect both the relative importance (‘weight’) of the criterion and its degree of achievement. For each alternative being considered, the point values are summed across the criteria to get a total score (hence these are additive value models), by which the alternatives are prioritised or ranked (or otherwise classified) relative to each other.
Thus, a value model (or ‘points system’) is simply a schedule of criteria and point values (for an example, see Table 1 in the subsection below) for the decision problem at hand. This representation is equivalent to the more traditional approach involving normalised criterion weights and ‘single-criterion value functions’ to represent the relative importance of the criteria and to combine values overall (see weighted sum model). The unweighted points system representation is easier to use and helps inform the explanation of the PAPRIKA method below.
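In symbols, the value models described above take the standard additive form: if alternative x achieves category x_i on criterion i (for criteria i = 1, …, n), and v_i(x_i) denotes the point value attached to that category, then the total score by which alternatives are ranked is

```latex
V(x) = \sum_{i=1}^{n} v_i(x_i)
```

so that higher-scoring alternatives are ranked above lower-scoring ones.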
An example of an unweighted points system
One example application for an unweighted points system is ranking candidates applying for a job.
Imagine that ‘Tom’, ‘Dick’ and ‘Harry’ are three candidates and that they are to be ranked with respect to their overall suitability for the job using the value model in Table 1 below. Suppose that after being assessed they are scored on the five criteria (see Table 1) like this:

Tom’s education is excellent, he has > 5 years of experience, but his references, social skills and enthusiasm are all poor.

Dick’s education is poor, he has 2–5 years of experience, and his references, social skills and enthusiasm are all good.

Harry’s education is good, he has < 2 years of experience, and his references, social skills and enthusiasm are all good.
Table 1: Example of a value model (points system) for ranking job candidates
| Criterion | Category | Points |
|---|---|---|
| Education | poor | 0 |
| | good | 8 |
| | very good | 20 |
| | excellent | 40 |
| Experience | < 2 years | 0 |
| | 2–5 years | 3 |
| | > 5 years | 10 |
| References | poor | 0 |
| | good | 27 |
| Social skills | poor | 0 |
| | good | 10 |
| Enthusiasm | poor | 0 |
| | good | 13 |
Summing the point values in Table 1 corresponding to the descriptions for Tom, Dick and Harry gives their total scores:

Tom’s total score = 40 + 10 + 0 + 0 + 0 = 50 points

Dick’s total score = 0 + 3 + 27 + 10 + 13 = 53 points

Harry’s total score = 8 + 0 + 27 + 10 + 13 = 58 points
Clearly, Harry has the highest total score. Therefore, according to the value model (and how the candidates were assessed) he is the best of the three candidates. (Though, clearly, relative to other candidates who could potentially have applied for the job, Harry is not as good as the best hypothetically possible candidate – who would score a ‘perfect’ 40 + 10 + 27 + 10 + 13 = 100 points.)
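These total-score calculations can be sketched in code. The dictionary below simply transcribes Table 1; the names and structure are illustrative, not part of the method itself.

```python
# Points system from Table 1: criterion -> category -> points.
POINTS = {
    "education": {"poor": 0, "good": 8, "very good": 20, "excellent": 40},
    "experience": {"< 2 years": 0, "2-5 years": 3, "> 5 years": 10},
    "references": {"poor": 0, "good": 27},
    "social skills": {"poor": 0, "good": 10},
    "enthusiasm": {"poor": 0, "good": 13},
}

def total_score(candidate):
    """Sum the point values for a candidate's category on each criterion."""
    return sum(POINTS[criterion][category]
               for criterion, category in candidate.items())

tom = {"education": "excellent", "experience": "> 5 years",
       "references": "poor", "social skills": "poor", "enthusiasm": "poor"}
dick = {"education": "poor", "experience": "2-5 years",
        "references": "good", "social skills": "good", "enthusiasm": "good"}
harry = {"education": "good", "experience": "< 2 years",
         "references": "good", "social skills": "good", "enthusiasm": "good"}

print(total_score(tom), total_score(dick), total_score(harry))  # 50 53 58
```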
In general terms, having specified the criteria and categories for a given value model, the challenge is to derive point values that accurately reflect the relative importance of the criteria and categories to the decision-maker. Deriving valid and reliable point values is arguably the most difficult task when creating a value model. The PAPRIKA method does this based on decision-makers’ preferences as expressed using pairwise rankings of alternatives.
Overview of the PAPRIKA method
As mentioned at the start of the article, PAPRIKA is a (partial) acronym for ‘Potentially All Pairwise RanKings of all possible Alternatives’.
The PAPRIKA method pertains both to value models for ranking particular alternatives that are known to decision-makers (e.g. as in the job candidates example above) and to models for ranking potentially all hypothetically possible alternatives in a pool that is changing over time (e.g. patients presenting for medical care). The following explanation is centred on this second type of application because it is more general.
PAPRIKA is based on the fundamental principle that an overall ranking of all possible alternatives representable by a given value model – i.e. all possible combinations of the categories on the criteria – is defined when all pairwise rankings of the alternatives vis-à-vis each other are known (provided the rankings are consistent).
(As an analogy, suppose you wanted to rank all competitors at the next Olympic Games from the youngest to the oldest. If you knew how each person was pairwise ranked relative to everyone else with respect to their ages – i.e. for each possible pair of individuals, you identified who is the younger of the two individuals or that they’re the same age – then you could produce an overall ranking of competitors from the youngest to the oldest.)
However, depending on the number of criteria and categories, the number of pairwise rankings of all possible alternatives is potentially in the millions or even billions. Many of these pairwise rankings, however, are automatically resolved because one alternative in the pair has a higher category for at least one criterion and none lower for the other criteria – such pairs are known as ‘dominated pairs’. But this still leaves potentially millions or billions of ‘undominated pairs’ – pairs of alternatives where one has a higher ranked category for at least one criterion and a lower ranked category for at least one other criterion than the other alternative, and hence a judgement is required for the alternatives to be pairwise ranked. With reference to the example of ranking job candidates in the previous section, an example of an undominated pair (of candidates) would be where one person in the pair is, say, highly educated but inexperienced whereas the other person is uneducated but highly experienced, and so a judgement is required to pairwise rank this pair.
If there are n possible alternatives, there are n(n−1)/2 pairwise rankings. For example, for a value model with eight criteria and four categories within each criterion, and hence 4^{8} = 65,536 possible alternatives, there are 65,536 × 65,535 / 2 = 2,147,450,880 pairwise rankings. Even after eliminating the 99,934,464 dominated pairs, there are still 2,047,516,416 undominated pairs to be ranked.^{[1]} Clearly, performing anywhere near this number of pairwise rankings is impossible without a special method.
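These counts can be checked combinatorially. The sketch below (my own derivation, not from the cited paper) multiplies, per criterion, the number of ordered category pairs (a, b) with a ≥ b to count weakly dominating ordered pairs, then subtracts the pairs where the two alternatives are identical.

```python
def pair_counts(categories_per_criterion):
    """Return (total, dominated, undominated) unordered pair counts for a
    value model with the given number of categories on each criterion."""
    n_alternatives = 1
    weakly_dominating = 1  # ordered pairs (x, y) with x >= y on every criterion
    for m in categories_per_criterion:
        n_alternatives *= m
        weakly_dominating *= m * (m + 1) // 2
    total = n_alternatives * (n_alternatives - 1) // 2
    dominated = weakly_dominating - n_alternatives  # drop the x == y cases
    return total, dominated, total - dominated

# Eight criteria with four categories each:
print(pair_counts([4] * 8))  # (2147450880, 99934464, 2047516416)
```

The same function applied to the three-criterion, two-category model used in the demonstration later in the article gives 28 pairs in total, of which 19 are dominated and 9 undominated.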
PAPRIKA solves this problem by ensuring that the number of pairwise rankings that decision-makers need to perform is kept to a minimum – only a small fraction of the potentially millions or billions of undominated pairs – so that the method is practicable. It does this by, for each undominated pair explicitly ranked by decision-makers, identifying (and eliminating) all undominated pairs implicitly ranked as corollaries of this and other explicitly ranked pairs (via the transitivity property of additive value models, as illustrated in the simple demonstration later below).
The method begins with the decision-maker pairwise ranking undominated pairs defined on just two criteria at-a-time (where, in effect, all other criteria’s categories are pairwise identical). Again with reference to the example of ranking job candidates, an example of such a pairwise-ranking question is: “Who would you prefer to hire: someone whose education is poor but who has > 5 years of experience, or another person whose education is excellent but who has < 2 years of experience, all else the same?” (see Figure 1).
Figure 1: Example of a pairwise-ranking question (a screenshot from 1000Minds)
Each time the decision-maker ranks a pair (such as the example above), all undominated pairs implicitly ranked as corollaries are identified and discarded. After the undominated pairs defined on just two criteria at-a-time have been ranked, the decision-maker may continue (she can stop at any time) with pairs defined on successively more criteria, until potentially all undominated pairs are ranked. Thus, Potentially All Pairwise RanKings of all possible Alternatives (hence the PAPRIKA acronym) are identified: as dominated pairs (given), undominated pairs explicitly ranked by the decision-maker, or undominated pairs implicitly ranked as corollaries. From the explicitly ranked pairs, point values are obtained via linear programming; although multiple solutions to the linear program are possible, the resulting point values all reproduce the same overall ranking of alternatives.
Simulations of PAPRIKA’s use reveal that if the decision-maker stops after having ranked the undominated pairs defined on just two criteria at-a-time, the resulting overall ranking of all possible alternatives is very highly correlated with the decision-maker’s ‘true’ overall ranking that would be obtained if all undominated pairs (involving more than two criteria) were ranked.^{[1]}
Therefore, for most practical purposes decision-makers are unlikely to need to rank pairs defined on more than two criteria, thereby reducing the elicitation burden. For example, approximately 95 pairwise rankings are required for the value model with eight criteria and four categories each referred to above, and 25 pairwise rankings for a model with five criteria and three categories each.^{[1]} The real-world applications of PAPRIKA referred to earlier suggest that decision-makers are able to rank comfortably more than 50 and up to at least 100 pairs, and relatively quickly, and that this is sufficient for most applications.
Theoretical antecedents
The PAPRIKA method’s closest theoretical antecedent is Pairwise Trade-off Analysis,^{[31]} a precursor to Adaptive Conjoint Analysis in marketing research.^{[32]} Like the PAPRIKA method, Pairwise Trade-off Analysis is based on the idea that undominated pairs that are explicitly ranked by the decision-maker can be used to implicitly rank other undominated pairs. Pairwise Trade-off Analysis was abandoned in the late 1970s, however, because it lacked a method for systematically identifying implicitly ranked pairs.
The ZAPROS method (from Russian for ‘Closed Procedure Near References Situations’) was also proposed;^{[33]} however, with respect to pairwise ranking all undominated pairs defined on two criteria “it is not efficient to try to obtain full information”.^{[34]} As explained in the present article, the PAPRIKA method overcomes this efficiency problem.
A simple demonstration of the PAPRIKA method
The PAPRIKA method can be easily demonstrated via the simple example of determining the point values for a value model with just three criteria – denoted by ‘a’, ‘b’ and ‘c’ – and two categories within each criterion – ‘1’ and ‘2’, where 2 is the higher ranked category.^{[1]}
This value model’s six point values (two for each criterion) can be represented by the variables a1, a2, b1, b2, c1, c2 (a2 > a1, b2 > b1, c2 > c1), and the eight possible alternatives (2^{3} = 8) as ordered triples of the categories on the criteria (abc): 222, 221, 212, 122, 211, 121, 112, 111. These eight alternatives and their total score equations – derived by simply adding up the variables corresponding to the point values (which are as yet unknown: to be determined by the method being demonstrated here) – are listed in Table 2.
Undominated pairs are represented as ‘221 vs (versus) 212’ or, in terms of the total score equations, as ‘a2 + b2 + c1 vs a2 + b1 + c2’, etc. [Recall, as explained earlier, an ‘undominated pair’ is a pair of alternatives where one is characterised by a higher ranked category for at least one criterion and a lower ranked category for at least one other criterion than the other alternative, and hence a judgement is required for the alternatives to be pairwise ranked. Conversely, the alternatives in a ‘dominated pair’ (e.g. 121 vs 111 – corresponding to a1 + b2 + c1 vs a1 + b1 + c1) are inherently pairwise ranked due to one having a higher category for at least one criterion and none lower for the other criteria (and no matter what the point values are, given a2 > a1, b2 > b1 and c2 > c1, the pairwise ranking will always be the same).]
‘Scoring’ this model involves determining the values of the six point value variables (a1, a2, b1, b2, c1, c2) so that the decision-maker’s preferred ranking of the eight alternatives is realised.
For many readers, this simple value model can perhaps be made more concrete by considering an example to which most people can probably relate: a model for ranking job candidates consisting of the three criteria (for example) (a) education, (b) experience, and (c) references, each with two ‘performance’ categories, (1) poor or (2) good. (This is a simplified version of the illustrative value model in Table 1 earlier in the article.)
Accordingly, each of this model’s eight possible alternatives can be thought of as being a ‘type’ (or profile) of candidate who might ever, hypothetically, apply. For example, ‘222’ denotes a candidate who is good on all three criteria; ‘221’ is a candidate who is good on education and experience but poor on references; ‘212’ a third who is good on education, poor on experience, and good on references; etc.
Finally, with respect to undominated pairs, 221 vs 212, for example, represents candidate 221, who has good experience and poor references, whereas candidate 212 has the opposite characteristics (and both have good education). Thus, which is the better candidate ultimately depends on the decision-maker’s preferences with respect to the relative importance of experience vis-à-vis references.
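The dominated/undominated distinction can be expressed as a simple test on two alternatives represented as tuples of category numbers. This is an illustrative sketch (the function name and encoding are mine), assuming a higher number denotes a higher ranked category.

```python
def classify_pair(x, y):
    """Classify a pair of alternatives, each a tuple of category numbers
    (higher number = higher ranked category on that criterion)."""
    if x == y:
        return "identical"
    if all(a >= b for a, b in zip(x, y)) or all(a <= b for a, b in zip(x, y)):
        # One alternative is at least as good on every criterion.
        return "dominated"
    # Higher on some criterion, lower on another: a judgement is needed.
    return "undominated"

print(classify_pair((1, 2, 1), (1, 1, 1)))  # dominated (121 vs 111)
print(classify_pair((2, 2, 1), (2, 1, 2)))  # undominated (221 vs 212)
```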
Table 2: The eight possible alternatives and their totalscore equations
| Alternative | Total-score equation |
|---|---|
| 222 | a2 + b2 + c2 |
| 221 | a2 + b2 + c1 |
| 212 | a2 + b1 + c2 |
| 122 | a1 + b2 + c2 |
| 211 | a2 + b1 + c1 |
| 121 | a1 + b2 + c1 |
| 112 | a1 + b1 + c2 |
| 111 | a1 + b1 + c1 |
Identifying undominated pairs
PAPRIKA’s first step is to identify the undominated pairs. With just eight alternatives this can be done by pairwise comparing all of them vis-à-vis each other and discarding dominated pairs.
This simple approach can be represented by the matrix in Figure 2, where the eight possible alternatives (in bold) are listed down the left-hand side and also along the top. Each alternative on the left-hand side is pairwise compared with each alternative along the top with respect to which of the two alternatives is higher ranked (i.e. in the present example, which candidate is more desirable for the job). The cells with hats (^) denote dominated pairs (where no judgement is required), and the empty cells are either on the main diagonal (each alternative pairwise compared with itself) or the inverses of the non-empty cells containing the undominated pairs (where a judgement is required).
Figure 2: Undominated pairs identified by pairwise comparing the eight possible alternatives (emboldened)
| vs | **222** | **221** | **212** | **122** | **112** | **121** | **211** | **111** |
|---|---|---|---|---|---|---|---|---|
| **222** | | ^ | ^ | ^ | ^ | ^ | ^ | ^ |
| **221** | | | (i) b2 + c1 vs b1 + c2 | (ii) a2 + c1 vs a1 + c2 | (iv) a2 + b2 + c1 vs a1 + b1 + c2 | ^ | ^ | ^ |
| **212** | | | | (iii) a2 + b1 vs a1 + b2 | ^ | (v) a2 + b1 + c2 vs a1 + b2 + c1 | ^ | ^ |
| **122** | | | | | ^ | ^ | (vi) a1 + b2 + c2 vs a2 + b1 + c1 | ^ |
| **112** | | | | | | (\*i) b1 + c2 vs b2 + c1 | (\*ii) a1 + c2 vs a2 + c1 | ^ |
| **121** | | | | | | | (\*iii) a1 + b2 vs a2 + b1 | ^ |
| **211** | | | | | | | | ^ |
| **111** | | | | | | | | |
Figure 2 notes: ^ denotes dominated pairs. The undominated pairs are labelled with Roman numerals; the three with asterisks are duplicates of pairs (i)–(iii).
As summarised in Figure 2, there are nine undominated pairs (labelled with Roman numerals). However, three are duplicates after any variables common to a pair are ‘cancelled’ (e.g. pair *i is a duplicate of pair i, etc.). Thus, there are six unique undominated pairs (without asterisks in Figure 2, and listed later below).
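For this small model, both counts can be reproduced by brute force. The sketch below (an illustration, assuming the 1/2 category coding above) enumerates all 28 pairs, keeps the undominated ones, and cancels the categories common to each pair to find the unique pairs.

```python
from itertools import combinations, product

alternatives = list(product([1, 2], repeat=3))  # the 2^3 = 8 possible alternatives

# Undominated pairs: higher on at least one criterion, lower on at least one other.
undominated = [
    (x, y) for x, y in combinations(alternatives, 2)
    if any(a > b for a, b in zip(x, y)) and any(a < b for a, b in zip(x, y))
]

def cancelled(x, y):
    """Cancel the categories common to both alternatives, e.g. 221 vs 212 and
    121 vs 112 both reduce to _21 vs _12 (None marks a cancelled criterion)."""
    rx = tuple(a if a != b else None for a, b in zip(x, y))
    ry = tuple(b if a != b else None for a, b in zip(x, y))
    return frozenset([rx, ry])

unique = {cancelled(x, y) for x, y in undominated}
print(len(undominated), len(unique))  # 9 6
```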
The cancellation of variables common to undominated pairs can be illustrated as follows. When comparing alternatives 121 and 112, for example, a1 can be subtracted from both sides of a1 + b2 + c1 vs a1 + b1 + c2. Similarly, when comparing 221 and 212, a2 can be subtracted from both sides of a2 + b2 + c1 vs a2 + b1 + c2. For both pairs this leaves the same ‘cancelled’ form: b2 + c1 vs b1 + c2.
Formally, these subtractions reflect the ‘joint-factor’ independence property of additive value models:^{[35]} the ranking of undominated pairs (in uncancelled form) is independent of their tied rankings on one or more criteria. Notationally, undominated pairs in their cancelled forms, like b2 + c1 vs b1 + c2, are also representable as _21 vs _12 – i.e. where ‘_’ signifies identical categories for the identified criterion.
In summary, here are the six undominated pairs for the value model:

(i) b2 + c1 vs b1 + c2

(ii) a2 + c1 vs a1 + c2

(iii) a2 + b1 vs a1 + b2

(iv) a2 + b2 + c1 vs a1 + b1 + c2

(v) a2 + b1 + c2 vs a1 + b2 + c1

(vi) a1 + b2 + c2 vs a2 + b1 + c1
The task is to pairwise rank these six undominated pairs, with the objective that the decision-maker is required to perform the fewest pairwise rankings possible (thereby minimising the elicitation burden).
Ranking undominated pairs and identifying implicitly ranked pairs
Undominated pairs defined on just two criteria are intrinsically the least cognitively difficult for the decision-maker to pairwise rank, relative to pairs defined on more criteria. Thus, arbitrarily beginning here with pair (i) b2 + c1 vs b1 + c2, the decision-maker is asked: “Which alternative do you prefer, _21 or _12 (i.e. given they’re identical on criterion a), or are you indifferent between them?” This choice, in other words, is between a candidate with good experience and poor references and another with poor experience and good references, all else the same.
Suppose the decision-maker answers: “I prefer _21 to _12” (i.e. good experience and poor references is preferred to poor experience and good references). This preference can be represented by ‘_21 ≻ _12’, which corresponds, in terms of total score equations, to b2 + c1 > b1 + c2 [where ‘≻’ and ‘~’ (used later) denote strict preference and indifference respectively, corresponding to the usual relations ‘>’ and ‘=’ for the total score equations].
Central to the PAPRIKA method is the identification of all undominated pairs implicitly ranked as corollaries of the explicitly ranked pairs. Thus, given a2 > a1 (i.e. good education ≻ poor education), it is clear that (i) b2 + c1 > b1 + c2 (as above) implies pair (iv) (see Figure 2) is ranked as a2 + b2 + c1 > a1 + b1 + c2. This reflects the transitivity property of (additive) value models. Specifically, 221≻121 (by dominance) and 121≻112 (i.e. pair i _21≻_12, as above) implies (iv) 221≻112; equivalently, 212≻112 and 221≻212 implies 221≻112.
Next, corresponding to pair (ii) a2 + c1 vs a1 + c2, suppose the decision-maker is asked: “Which alternative do you prefer, 1_2 or 2_1 (given they’re identical on criterion b), or are you indifferent between them?” This choice, in other words, is between a candidate with poor education and good references and another with good education and poor references, all else the same.
Suppose the decision-maker answers: “I prefer 1_2 to 2_1” (i.e. poor education and good references is preferred to good education and poor references). This corresponds to a1 + c2 > a2 + c1. Also, given b2 > b1 (good experience ≻ poor experience), this implies pair (vi) is ranked as a1 + b2 + c2 > a2 + b1 + c1.
Furthermore, the two explicitly ranked pairs (i) b2 + c1 > b1 + c2 and (ii) a1 + c2 > a2 + c1 imply pair (iii) is ranked as a1 + b2 > a2 + b1. This can easily be seen by adding the corresponding sides of the inequalities for pairs (i) and (ii) and cancelling common variables. Again, this reflects the transitivity property: (i) 121≻112 and (ii) 112≻211 implies (iii) 121≻211; equivalently, 122≻221 and 221≻212 implies 122≻212.
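This corollary can be checked mechanically by representing each cancelled pair as a coefficient vector over the six point-value variables (an illustrative encoding, not notation from the article): adding the vectors for inequalities (i) and (ii) yields exactly the vector for (iii).

```python
# Coefficient vectors over (a1, a2, b1, b2, c1, c2): +1 for a variable on the
# preferred side of the inequality, -1 for a variable on the other side.
pair_i   = (0, 0, -1, 1, 1, -1)   # (i)   b2 + c1 > b1 + c2
pair_ii  = (1, -1, 0, 0, -1, 1)   # (ii)  a1 + c2 > a2 + c1
pair_iii = (1, -1, -1, 1, 0, 0)   # (iii) a1 + b2 > a2 + b1

# Adding inequalities (i) and (ii) term by term (the c1 and c2 terms cancel)
# yields inequality (iii), so pair (iii) is implicitly ranked.
print(tuple(u + v for u, v in zip(pair_i, pair_ii)) == pair_iii)  # True
```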
As a result of just two pairwise comparisons explicitly performed by the decision-maker, five of the six undominated pairs have been ranked. The decision-maker may cease ranking whenever she likes (i.e. before all undominated pairs are ranked), but let’s suppose she continues and ranks the remaining pair (v) as a2 + b1 + c2 > a1 + b2 + c1 (in response to a question analogous to the two spelled out above).
Thus, all six undominated pairs have been ranked as a result of the decision-maker explicitly ranking just three:

(i) b2 + c1 > b1 + c2

(ii) a1 + c2 > a2 + c1

(v) a2 + b1 + c2 > a1 + b2 + c1
The overall ranking of alternatives and point values
Because the three pairwise rankings above are consistent – and all n(n−1)/2 = 28 pairwise rankings (n = 8) for this simple value model are known – a complete overall ranking of all eight possible alternatives is defined (1st to 8th): 222, 122, 221, 212, 121, 112, 211, 111.
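This overall ranking can be recovered by counting, for each alternative, how many of the other seven it is pairwise ranked above, using dominance plus the six ranked undominated pairs. The encoding below is an illustrative sketch (the pair representation is mine, not from the article).

```python
from itertools import product

alternatives = list(product([1, 2], repeat=3))

# The six undominated pairs in cancelled form, preferred side first, per the
# decision-maker's rankings (i)-(vi); None marks a cancelled ('_') criterion.
RANKED = [
    ((None, 2, 1), (None, 1, 2)),  # (i)
    ((1, None, 2), (2, None, 1)),  # (ii)
    ((1, 2, None), (2, 1, None)),  # (iii)
    ((2, 2, 1), (1, 1, 2)),        # (iv)
    ((2, 1, 2), (1, 2, 1)),        # (v)
    ((1, 2, 2), (2, 1, 1)),        # (vi)
]

def cancelled(x, y):
    return (tuple(a if a != b else None for a, b in zip(x, y)),
            tuple(b if a != b else None for a, b in zip(x, y)))

def ranked_above(x, y):
    """True if x is pairwise ranked above y, by dominance or by the
    ranked undominated pairs."""
    if x != y and all(a >= b for a, b in zip(x, y)):
        return True
    return cancelled(x, y) in RANKED

# With all 28 pairwise rankings known and consistent, counting 'wins'
# recovers the overall ranking 222, 122, 221, 212, 121, 112, 211, 111.
wins = {x: sum(ranked_above(x, y) for y in alternatives if y != x)
        for x in alternatives}
for alt in sorted(alternatives, key=wins.get, reverse=True):
    print("".join(map(str, alt)), wins[alt])
```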
Simultaneously solving the three inequalities above (i, ii, v), subject to a2 > a1, b2 > b1 and c2 > c1, gives the point values (i.e. the ‘points system’), reflecting the relative importance of the criteria to the decisionmaker. For example, one solution is: a1 = 0, a2 = 2, b1 = 0, b2 = 4, c1 = 0 and c2 = 3 (or normalised so the ‘best’ alternative, 222, scores 100 points: a1 = 0, a2 = 22.2, b1 = 0, b2 = 44.4, c1 = 0 and c2 = 33.3).
Thus, in the context of the example of a value model for ranking candidates for a job, the most important criterion is revealed to be (good) experience (b, 4 points) followed by references (c, 3 points) and, least important, education (a, 2 points). Although multiple solutions to the three inequalities are possible, the resulting point values all reproduce the same overall ranking of alternatives as listed above and reproduced here with their total scores:

1st 222: 2 + 4 + 3 = 9 points (or 22.2 + 44.4 + 33.3 = 100 points normalised) – i.e. total score from adding the point values above.

2nd 122: 0 + 4 + 3 = 7 points (or 0 + 44.4 + 33.3 = 77.8 points normalised)

3rd 221: 2 + 4 + 0 = 6 points (or 22.2 + 44.4 + 0 = 66.7 points normalised)

4th 212: 2 + 0 + 3 = 5 points (or 22.2 + 0 + 33.3 = 55.6 points normalised)

5th 121: 0 + 4 + 0 = 4 points (or 0 + 44.4 + 0 = 44.4 points normalised)

6th 112: 0 + 0 + 3 = 3 points (or 0 + 0 + 33.3 = 33.3 points normalised)

7th 211: 2 + 0 + 0 = 2 points (or 22.2 + 0 + 0 = 22.2 points normalised)

8th 111: 0 + 0 + 0 = 0 points (or 0 + 0 + 0 = 0 points normalised)
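These figures can be checked directly with the un-normalised solution above (a sketch; the variable names are illustrative): the three explicit inequalities hold, and sorting by total score reproduces the overall ranking.

```python
from itertools import product

# The example solution to inequalities (i), (ii) and (v):
# a1 = 0, a2 = 2, b1 = 0, b2 = 4, c1 = 0, c2 = 3.
points = ({1: 0, 2: 2}, {1: 0, 2: 4}, {1: 0, 2: 3})  # criteria a, b, c

def total(alt):
    """Total score of an alternative such as (2, 2, 1) for '221'."""
    return sum(p[cat] for p, cat in zip(points, alt))

# The three explicit rankings hold for these point values...
assert points[1][2] + points[2][1] > points[1][1] + points[2][2]   # (i)
assert points[0][1] + points[2][2] > points[0][2] + points[2][1]   # (ii)
assert (points[0][2] + points[1][1] + points[2][2]
        > points[0][1] + points[1][2] + points[2][1])              # (v)

# ...and sorting by total score reproduces the overall ranking.
for alt in sorted(product([1, 2], repeat=3), key=total, reverse=True):
    print("".join(map(str, alt)), total(alt))
```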
Other things worth noting
First, the decision-maker may decline to explicitly rank any given undominated pair (thereby excluding it) on the grounds that at least one of the alternatives considered corresponds to an impossible combination of the categories on the criteria. Also, if the decision-maker cannot decide how to explicitly rank a given pair, she may skip it – and the pair may eventually be implicitly ranked as a corollary of other explicitly ranked pairs (via transitivity).
Second, in order for all undominated pairs to be ranked, the decision-maker will usually be required to perform fewer pairwise rankings if some indicate indifference rather than strict preference. For example, if the decision-maker had ranked pair (i) above as _21 ~ _12 (i.e. indifference) instead of _21 ≻ _12 (as above), then she would have needed to rank only one more pair rather than two (i.e. just two explicitly ranked pairs in total). On the whole, indifferently ranked pairs generate more corollaries with respect to implicitly ranked pairs than strictly ranked pairs do.
Finally, the order in which the decision-maker ranks the undominated pairs affects the number of rankings required. For example, if the decision-maker had ranked pair (iii) before pairs (i) and (ii), then it is easy to show that all three would have had to be explicitly ranked, as well as pair (v) (i.e. four explicitly ranked pairs in total). However, determining the optimal order is problematic, as it depends on the rankings themselves, which are unknown beforehand.
Applying PAPRIKA to ‘larger’ value models
Of course, most real-world value models have more criteria and categories than the simple example above, which means they have many more undominated pairs. For example, the value model referred to earlier with eight criteria and four categories within each criterion (and 4^{8} = 65,536 possible alternatives) has 2,047,516,416 undominated pairs in total (analogous to the nine identified in Figure 2), of which, excluding replicas, 402,100,560 are unique (analogous to the six in the example above).^{[1]} (As mentioned earlier, for a model of this size the decision-maker is required to explicitly rank only approximately 95 pairs defined on two criteria at-a-time, which most decision-makers are likely to be comfortable with.)
For such real-world value models, the simple pairwise-comparisons approach to identifying undominated pairs used in the previous subsection (represented in Figure 2) is highly impractical. Likewise, identifying all pairs implicitly ranked as corollaries of the explicitly ranked pairs becomes increasingly intractable as the numbers of criteria and categories increase. The PAPRIKA method therefore relies on computationally efficient processes for identifying unique undominated pairs and implicitly ranked pairs respectively. The details of these processes are beyond the scope of this article but are available elsewhere.^{[1]}
How does PAPRIKA compare with traditional scoring methods?
PAPRIKA entails a greater number of judgements (but typically fewer than 100, and often fewer than 50^{[1]}) than most ‘traditional’ scoring methods, such as direct rating,^{[36]} SMART,^{[37]} SMARTER^{[38]} and the Analytic Hierarchy Process.^{[39]} Clearly, though, different types of judgements are involved. For PAPRIKA, the judgements entail pairwise comparisons of undominated pairs (usually defined on just two criteria at-a-time), whereas most traditional methods involve interval-scale or ratio-scale measurements of the decision-maker’s preferences with respect to the relative importance of criteria and categories respectively. Arguably, the judgements for PAPRIKA are simpler and more natural, and therefore they might reasonably be expected to reflect decision-makers’ preferences more accurately.
References

^ ^{a} ^{b} ^{c} ^{d} ^{e} ^{f} ^{g} ^{h} Hansen, Paul; Ombler, Franz (2008). "A new method for scoring additive multi-attribute value models using pairwise rankings of alternatives". Journal of Multi-Criteria Decision Analysis 15 (3–4): 87.

^ Wagstaff, Jeremy (21 September 2005). "Asian Innovation Awards: Contenders Stress Different Ways of Thinking". The Asian Wall Street Journal.

^ Taylor, William J.; Laking, George (2010). "Value for money – recasting the problem in terms of dynamic access prioritisation". Disability & Rehabilitation 32 (12): 1020.

^ Hansen, Paul; Hendry, Alison; Naden, Ray; Ombler, Franz; Stewart, Ralph (2012). "A new process for creating points systems for prioritising patients for elective health services". Clinical Governance: an International Journal 17 (3): 200.

^ Fitzgerald, Avril; Conner-Spady, Barbara; DeCoster, Carolyn; Naden, Ray; Hawker, Gillian A.; Noseworthy, Thomas (October 2009). "WCWL Rheumatology Priority Referral Score Reliability and Validity Testing". Arthritis & Rheumatism 60 (Suppl 10): 54.

^ Fitzgerald, Avril; De Coster, Carolyn; McMillan, Stewart; Naden, Ray; Armstrong, Fraser; Barber, Alison; Cunning, Les; Conner-Spady, Barbara; Hawker, Gillian; Lacaille, Diane; Lane, Carolyn; Mosher, Dianne; Rankin, Jim; Sholter, Dalton; Noseworthy, Tom (2011). "Relative urgency for referral from primary care to rheumatologists: The Priority Referral Score". Arthritis Care & Research 63 (2): 231–9.

^ Noseworthy, T; De Coster, C; Naden, R (2009). "Priority-setting tools for improving access to medical specialists". 6th Health Technology Assessment International Annual Meeting. Annals, Academy of Medicine, Singapore 38 (Singapore): S78.

^ Neogi, Tuhina; Aletaha, Daniel; Silman, Alan J.; Naden, Raymond L.; Felson, David T.; Aggarwal, Rohit; Bingham, Clifton O.; Birnbaum, Neal S.; Burmester, Gerd R.; Bykerk, Vivian P.; Cohen, Marc D.; Combe, Bernard; Costenbader, Karen H.; Dougados, Maxime; Emery, Paul; Ferraccioli, Gianfranco; Hazes, Johanna M. W.; Hobbs, Kathryn; Huizinga, Tom W. J.; Kavanaugh, Arthur; Kay, Jonathan; Khanna, Dinesh; Kvien, Tore K.; Laing, Timothy; Liao, Katherine; Mease, Philip; Ménard, Henri A.; Moreland, Larry W.; Nair, Raj; Pincus, Theodore (2010). "The 2010 American College of Rheumatology/European League Against Rheumatism classification criteria for rheumatoid arthritis: Phase 2 methodological report". Arthritis & Rheumatism 62 (9): 2582.

^ Van Den Hoogen, F.; Khanna, D.; Fransen, J.; Johnson, S. R.; Baron, M.; Tyndall, A.; Matucci-Cerinic, M.; Naden, R. P.; Medsger, T. A.; Carreira, P. E.; Riemekasten, G.; Clements, P. J.; Denton, C. P.; Distler, O.; Allanore, Y.; Furst, D. E.; Gabrielli, A.; Mayes, M. D.; Van Laar, J. M.; Seibold, J. R.; Czirjak, L.; Steen, V. D.; Inanc, M.; Kowal-Bielecka, O.; Müller-Ladner, U.; Valentini, G.; Veale, D. J.; Vonk, M. C.; Walker, U. A. et al. (2013). "2013 Classification Criteria for Systemic Sclerosis: An American College of Rheumatology/European League Against Rheumatism Collaborative Initiative". Arthritis & Rheumatism 65 (11): 2737.

^ Van Den Hoogen, F.; Khanna, D.; Fransen, J.; Johnson, S. R.; Baron, M.; Tyndall, A.; Matucci-Cerinic, M.; Naden, R. P.; Medsger, T. A.; Carreira, P. E.; Riemekasten, G.; Clements, P. J.; Denton, C. P.; Distler, O.; Allanore, Y.; Furst, D. E.; Gabrielli, A.; Mayes, M. D.; Van Laar, J. M.; Seibold, J. R.; Czirjak, L.; Steen, V. D.; Inanc, M.; Kowal-Bielecka, O.; Muller-Ladner, U.; Valentini, G.; Veale, D. J.; Vonk, M. C.; Walker, U. A. et al. (2013). "2013 classification criteria for systemic sclerosis: An American college of rheumatology/European league against rheumatism collaborative initiative". Annals of the Rheumatic Diseases 72 (11): 1747–55.

^ Johnson, S. R.; Naden, R. P.; Fransen, J.; Van Den Hoogen, F.; Pope, J. E.; Baron, M.; Tyndall, A.; Matucci-Cerinic, M.; Denton, C. P.; Distler, O.; Gabrielli, A.; Van Laar, J. M.; Mayes, M.; Steen, V.; Seibold, J. R.; Clements, P.; Medsger, T. A.; Carreira, P. E.; Riemekasten, G.; Chung, L.; Fessler, B. J.; Merkel, P. A.; Silver, R.; Varga, J.; Allanore, Y.; Mueller-Ladner, U.; Vonk, M. C.; Walker, U. A.; Cappelli, S.; Khanna, D. (2014). "Multicriteria decision analysis methods with 1000Minds for developing systemic sclerosis classification criteria". Journal of Clinical Epidemiology 67 (6): 706.

^ Golan, Ofra; Hansen, Paul; Kaplan, Giora; Tal, Orna (2011). "Health technology prioritization: Which criteria for prioritizing new technologies and what are their relative weights?". Health Policy 102 (2–3): 126–35.

^ Golan, Ofra G; Hansen, Paul (2012). "Which health technologies should be funded? A prioritization framework based explicitly on value for money". Israel Journal of Health Policy Research 1 (1): 44.

^ Taylor, W. J.; Singh, J. A.; Saag, K. G.; Dalbeth, N.; MacDonald, P. A.; Edwards, N. L.; Simon, L. S.; Stamp, L. K.; Neogi, T.; Gaffo, A. L.; Khanna, P. P.; Becker, M. A.; Schumacher Jr, H. R. (2011). "Bringing It All Together: A Novel Approach to the Development of Response Criteria for Chronic Gout Clinical Trials". The Journal of Rheumatology 38 (7): 1467–70.

^ Taylor, William J.; Brown, Melanie; Aati, Opetaia; Weatherall, Mark; Dalbeth, Nicola (2013). "Do Patient Preferences for Core Outcome Domains for Chronic Gout Studies Support the Validity of Composite Response Criteria?". Arthritis Care & Research 65 (8): 1259.

^ Dobson, F.; Hinman, R.S.; Roos, E.M.; Abbott, J.H.; Stratford, P.; Davis, A.M.; Buchbinder, R.; Snyder-Mackler, L.; Henrotin, Y.; Thumboo, J.; Hansen, P.; Bennell, K.L. (2013). "OARSI recommended performance-based tests to assess physical function in people diagnosed with hip or knee osteoarthritis". Osteoarthritis and Cartilage 21 (8): 1042–52.

^ Nicolson, P. J.; French, S. D.; Hinman, R. S.; Hodges, P. W.; Dobson, F. L.; Bennell, K. L. (2014). "Developing key messages for people with osteoarthritis: A Delphi study". Osteoarthritis and Cartilage 22: S305.

^ Ruhland, Johannes (2006). "Strategic mobilization: What strategic management can learn from social movement research". Management 11 (44): 23–31.

^ Smith, C (2009). "Revealing monetary policy preferences". Reserve Bank of New Zealand Discussion Paper Series, DP2009/01.

^ Smith, K. F.; Fennessy, P. F. (2011). "The use of conjoint analysis to determine the relative importance of specific traits as selection criteria for the improvement of perennial pasture species in Australia". Crop and Pasture Science 62 (4): 355–65.

^ Smith, K. F.; Fennessy, P. F. (2014). "Utilizing Conjoint Analysis to Develop Breeding Objectives for the Improvement of Pasture Species for Contrasting Environments when the Relative Values of Individual Traits Are Difficult to Assess". Sustainable Agriculture Research 3 (2).

^ Byrne, T. J.; Amer, P. R.; Fennessy, P. F.; Hansen, P.; Wickham, B. W. (2011). "A preference-based approach to deriving breeding objectives: Applied to sheep breeding". Animal 6 (5): 778–88.

^ Boyd, Philip; Law, Cliff; Doney, Scott (2011). "A Climate Change Atlas for the Ocean". Oceanography 24 (2): 13–6.

^ Chhun, Sophal; Thorsnes, Paul; Moller, Henrik (2013). "Preferences for Management of Near-Shore Marine Ecosystems: A Choice Experiment in New Zealand". Resources 2 (3): 406–438.

^ Graff, P.; McIntyre, S. (2014). "Using ecological attributes as criteria for the selection of plant species under three restoration scenarios". Austral Ecology.

^ Crozier, G. K. D.; SchulteHostedde, A. I. (2014). "Towards Improving the Ethics of Ecological Research". Science and Engineering Ethics.

^ Uldana Baizyldayeva; Oleg Vlasov; Abu A. Kuandykov; Turekhan B. Akhmetov (2013). "Multi-Criteria Decision Support Systems. Comparative Analysis". Middle-East Journal of Scientific Research 16 (12): 1725–1730.

^ Karlin, B.; Davis, N.; Sanguinetti, A.; Gamble, K.; Kirkby, D.; Stokols, D. (2012). "Dimensions of Conservation: Exploring Differences Among Energy Behaviors". Environment and Behavior 46 (4): 423.

^ Hansen, P.; Kergozou, N.; Knowles, S.; Thorsnes, P. (2014). "Developing Countries in Need: Which Characteristics Appeal most to People when Donating Money?". The Journal of Development Studies: 1.

^ Belton, V.; Stewart, T. J. (2002). Multiple Criteria Decision Analysis: An Integrated Approach. Boston: Kluwer.

^ Johnson, Richard M. (1976). "Beyond conjoint measurement: A method of pairwise tradeoff analysis". Advances in Consumer Research 3: 353–8.

^ Green, P. E.; Krieger, A. M.; Wind, Y. (2001). "Thirty Years of Conjoint Analysis: Reflections and Prospects". Interfaces 31: S56.

^ Larichev, O.I.; Moshkovich, H.M. (1995). "ZAPROS-LM — A method and system for ordering multiattribute alternatives". European Journal of Operational Research 82 (3): 503.

^ Moshkovich, Helen M; Mechitov, Alexander I; Olson, David L (2002). "Ordinal judgments in multiattribute decision analysis". European Journal of Operational Research 137 (3): 625.

^ Krantz, D. H. (1972). "Measurement Structures and Psychological Laws". Science 175 (4029): 1427–35.

^ Von Winterfeldt, D.; Edwards, W. (1986). Decision Analysis and Behavioral Research. New York: Cambridge University Press.

^ Edwards, Ward (1977). "How to Use Multiattribute Utility Measurement for Social Decisionmaking". IEEE Transactions on Systems, Man, and Cybernetics 7 (5): 326.

^ Edwards, Ward; Barron, F.Hutton (1994). "SMARTS and SMARTER: Improved Simple Methods for Multiattribute Utility Measurement". Organizational Behavior and Human Decision Processes 60 (3): 306.

^ Saaty, T. L. (1980). The Analytic Hierarchy Process. New York: McGraw-Hill.