Using MAUT in Policy Development

Development of PPAU policy is not trivial. On any given issue, there are many potential policies to choose from and many criteria to be weighed up when judging their suitability.

To assist us with this process, we make use of a special kind of decision table, based on Multi-Attribute Utility Theory. That may sound complicated, but really it's just a table in a spreadsheet where we lay out the policy options, weight the criteria by importance and rate each option against each criterion. If you're interested in the underlying theory, see: https://wiki.ece.cmu.edu/ddl/index.php/Multiattribute_utility_theory .
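If you'd like to see the arithmetic in miniature first, here's a sketch in Python. The criteria names and numbers are invented for illustration; the real tables are walked through below.

    # One contributor scoring one policy option: multiply the weight of each
    # criterion by the option's 0-10 rating on that criterion, then sum.
    weights = {"Promotion of Culture": 7, "Content Creation Volume": 3}
    ratings = {"Promotion of Culture": 8, "Content Creation Volume": 4}

    score = sum(weights[c] * ratings[c] for c in weights)
    print(score)  # 7*8 + 3*4 = 68; the option with the biggest total wins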

The Purpose

  • Individually: To make us think carefully about:
    • The criteria that will be used to select our budding new policy.
    • The relative importance of each criterion.
    • The potential policy choices we should consider.
    • How well each policy choice compares against our criteria.
  • Collectively: To achieve consensus:
    • On all of the above.
    • To produce a numeric result showing the winner according to us all.

An Illustrated Primer

The Summary Section of the Spreadsheet

Here's a screenshot from the top of a MAUT spreadsheet... (MAUT Spread2.png)

Some things to notice:

  • The big scary RED "Statistics (Don't Edit)" box at the top.
    • It really means you should REALLY NOT TRY TO EDIT THIS PART.
    • That's there because each one of these spreadsheets is for several people to contribute to. This area at the top is just the summary, showing the average result from everyone who contributes.
    • One of the great things about Google Docs Spreadsheets is that many people can edit them at the same time when they are set to be "Shared". This means everyone can enter values into their own area and we can all see our collective results here.
  • The Title "Copyright Term (Average)" with the salmon-pink background.
    • This is telling you that the subject of the policy under consideration is "Copyright Term" and that this table right here is the average results.
  • The "Criteria" circled in Red.
    • These are the criteria we have agreed upon as the basis for judging the effectiveness of the policies under consideration.
    • Don't worry about whether you think these are the complete and correct criteria for this topic - this is just a sample.
    • Just to the right of the criteria is another column titled "Fits PPAU Values". If every individual contributor puts a "Yes" next to a criterion in their own table, then these will say "Yes" also. Otherwise they will be blank, which would indicate that someone thought there was a problem with the criterion.
  • The comment circled in Orange.
    • It is considered good practice to attach comments to each criterion, helping users of the spreadsheet understand what is really meant in each case.
    • Here in this example, my cursor was hovering over the "Promotion of Culture" criteria, so a helpful description appears.
  • The list of "CriteriaWeights" circled in Grey.
    • These are the averages of the weights that have been assigned to each criterion by all contributors.
    • Think of these as our collective position on the relative importance of each criterion.
  • The Proposals across the top, circled in Black.
    • These represent each of the potential policies under consideration.
    • It is also considered good practice to attach comments to each of these, if they are anything but totally obvious.
    • Sometimes, as in this case, each of these is an independent choice and we intend to pick just one of them. On other occasions, the decision is about which of many sub-policies to include (for example, when building policy for a Bill of Rights, we listed each of the rights that we might want to include across here).
  • The totals across the bottom, circled in a brownish-looking Grey.
    • This is where you look to see which policy is winning right now.
    • In this sample, the "5 year Term" is ahead by a slight margin with a total score of 211.12, but it's pretty close to the 10 year term result of 210.32.
    • Looking up, you will see columns for "Rating" and "*Weight" under each proposal.
      • This is an "Averages" table, so each "Rating" is the average of the ratings that have been assigned by each contributor.
      • The "*Weight" column is where we've multiplied the average rating by the Average CriteriaWeight. This makes sense, because a high rating for an insignificant criteria can be more important than a smaller rating on a very important criteria.
      • So, we've summed the (Average(CriteriaWeight) * Average(Rating)) to see the winner.
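If it helps to see the Averages table as a computation, here's a sketch of how those totals come about. The contributor names, criteria and numbers are all invented for illustration; only the arithmetic mirrors the spreadsheet.

    from statistics import mean

    criteria = ["Promotion of Culture", "Content Creation Volume"]
    proposals = ["5 year Term", "10 year Term"]

    # Each contributor assigns a weight to every criterion...
    weights = {
        "Alice": {"Promotion of Culture": 6, "Content Creation Volume": 4},
        "Bob":   {"Promotion of Culture": 5, "Content Creation Volume": 5},
    }
    # ...and a 0-10 rating to every (proposal, criterion) pair.
    ratings = {
        "Alice": {"5 year Term":  {"Promotion of Culture": 8, "Content Creation Volume": 6},
                  "10 year Term": {"Promotion of Culture": 7, "Content Creation Volume": 7}},
        "Bob":   {"5 year Term":  {"Promotion of Culture": 9, "Content Creation Volume": 5},
                  "10 year Term": {"Promotion of Culture": 6, "Content Creation Volume": 8}},
    }

    # The summary averages the weights and ratings across contributors,
    # multiplies them per criterion, and sums down each proposal's column.
    avg_weight = {c: mean(w[c] for w in weights.values()) for c in criteria}
    for p in proposals:
        avg_rating = {c: mean(ratings[u][p][c] for u in ratings) for c in criteria}
        total = sum(avg_weight[c] * avg_rating[c] for c in criteria)
        print(p, round(total, 2))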

An Individual Contributor Section of the Spreadsheet

Here's a screenshot from a Contributor section further down in a MAUT spreadsheet... (MAUT Spread3.png)

Some things to notice:

  • The big scary RED "User Contributions (Edit the White cells in the Table with your name)" box at the top.
    • Don't be scared. These tables laid out below here are for your contributions.
  • Where it says "Person1" circled in Red.
    • If there's already one of these tables with your name on it, then use that one.
      • If not, look for an unused one (probably named "Person1" or "Free" or similar) and use that.
      • Put your name in place of "Person1" to claim it.
  • The Criteria on the left:
    • These may already be created if you're coming late to the party, but...
      • If you disagree with any of them, put a "No" in the "Fits PPAU Values" column (Circled in Orange) next to the questionable criterion.
      • If you think there should be more criteria, then add one to your list.
        • WARNING: This is currently non-trivial, since every contributor's list needs to be the same, and so do the summary tables at the top, and all of the formulas need to be adjusted to make it all work right. If you don't know how to do this, just type in your own items and ask the Working Group leader to help you out. Yeah, we know this sucks. We're working on it. There are Python interfaces to the spreadsheets, and ultimately the Polly system will render these spreadsheets obsolete.
  • The Proposals on the top:
    • These may already be created if you're coming late to the party, but...
      • If you disagree with any of them, then you can rate them badly and that will stand out for everyone to see. Do not try to remove them.
      • If you think there should be more proposals, then add them on the right.
        • WARNING: This is currently non-trivial, since every contributor's list needs to be the same, and so do the summary tables at the top, and all of the formulas need to be adjusted to make it all work right. If you don't know how to do this, just type in your own items and ask the Working Group leader to help you out. Yeah, we know this sucks. We're working on it. There are Python interfaces to the spreadsheets, and ultimately the Polly system will render these spreadsheets obsolete.
  • The CriteriaWeights are Circled in Green:
    • These CriteriaWeights should all start out with a value of 5.
    • Your First Mission is to move value from one criterion to another until the relative importance of each criterion seems right to you.
      • So, if you add 1 to the "Content Creation Volume" CriteriaWeight, then you have to take 1 away from somewhere else.
      • If you fail to do this properly, the total at the bottom (Circled in blue) will not be 5.00.
      • This will also stand out in the summary tables up the top.
  • The Ratings are Circled in Purple:
    • These are scores out of 10. There is built-in validation that should only allow values from 0 to 10 in there.
      • The scores are always in the positive sense of each criterion: a higher value always refers to a more preferable outcome.
    • Your Second Mission is to assign ratings for each Proposal against each Criterion.
      • In this example, I have assigned 10 to "Content Creation Volume" for a "20 Year Term" policy, but only 5 to the "Indefinite Extension of Term" policy, because I think that such a policy interferes with creative endeavours by making it hard for people to draw on past creative works, and removes the financial incentive for published creators to do more than 'rest on their laurels'.
    • Once you have accomplished both missions (both are sketched in code just after this list), you can look to the totals Circled in Black.
      • Biggest number wins.
      • If the result surprises you, you may want to go back and review your weights and ratings.
      • This is a really good time to compare your results against others.
        • The next section discusses how to compare your results with others.
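To make the two missions concrete, here's a sketch of the checks your contributor table effectively enforces. The criterion "Public Domain Access" is a hypothetical addition for illustration, and we assume the blue-circled cell at the bottom of the weights column reports their average, which must come back to exactly 5.00.

    from statistics import mean

    # Mission 1: weights start at 5 each; moving value between criteria
    # must keep the average at exactly 5.00.
    weights = {"Promotion of Culture": 6,
               "Content Creation Volume": 5,
               "Public Domain Access": 4}   # hypothetical criterion
    assert mean(weights.values()) == 5.00, "rebalance: take from one, give to another"

    # Mission 2: every rating is a score from 0 to 10, where higher is
    # always the more preferable outcome.
    ratings = {"20 Year Term":                 {"Content Creation Volume": 10},
               "Indefinite Extension of Term": {"Content Creation Volume": 5}}
    for proposal, row in ratings.items():
        for criterion, score in row.items():
            assert 0 <= score <= 10, f"{proposal} / {criterion}: rating out of range"

    # With both missions done, your personal totals are plain weighted sums.
    for proposal, row in ratings.items():
        print(proposal, sum(weights[c] * r for c, r in row.items()))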

The AveDev Sections of the Spreadsheet

Now we're on to the Collective Mission of achieving Consensus Between Contributors.

You can start by simply scrolling around and seeing how other people's totals differ from your own, but this doesn't tell you enough to really have the right conversations to achieve consensus. For a more focussed comparison, we can look into the collective differences in the details of our CriteriaWeightings and our Proposal Ratings, preferably in that order. Here's how...

Here's a screenshot from the "Avedev Weights" and AveDev Ratings" tables near the top of a MAUT spreadsheet...

MAUT Spread4.png

AveDev Weights:

  • These tables are structured just like your individual contributor table.
  • The AveDev Weights table tells us about the differences in contributors' CriteriaWeights.
    • The CriteriaWeights column Circled in Red is the average deviation of all contributor weights.
      • Think of this as the average difference of contributor weights from the average. So, there's an average in the middle somewhere (shown in the Averages table), and these numbers show how widely the weights are spread around it. It's a measure of how much we agree on the Weights.
      • If a CriteriaWeight in AveDev Weights is zero, then we totally agree.
      • If they're greater than 1, then there may be a good basis for a focussed discussion.
        • Talk to your fellow working group members about their understanding of the criteria.
        • If you all agree on what the criterion really means, then probably leave the scores alone. If not, then document the new collective understanding (in a comment on the criterion cell) and everyone should go back and revise their weightings in light of the new understanding.
    • The "AveDev Weights" totals row Circled in Blue tells us the total effect on the result of difference in weights.
      • This is achieved by leaving all of the ratings as the average ratings, combined with the average deviation criteria weights.
      • The scale of these totals compared to the totals in the Averages table above, give you some idea of whether it's even worth discussing the weights any further.
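"Average deviation" here is the AVEDEV spreadsheet function: the mean absolute difference from the mean. A sketch, with invented numbers, of both the red-circled column and the blue-circled totals row:

    from statistics import mean

    def avedev(values):
        # Mean absolute difference from the mean (the AVEDEV spreadsheet function).
        values = list(values)
        m = mean(values)
        return mean(abs(v - m) for v in values)

    # Three contributors' weights for one criterion:
    print(round(avedev([6, 5, 4]), 2))  # 0.67 - mild disagreement

    # Totals row: average Ratings combined with the AveDev of the Weights.
    avg_ratings    = {"Promotion of Culture": 7.5,  "Content Creation Volume": 6.5}
    avedev_weights = {"Promotion of Culture": 0.67, "Content Creation Volume": 0.5}
    print(sum(avg_ratings[c] * avedev_weights[c] for c in avg_ratings))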

AveDev Ratings:

  • The AveDev Ratings table tells us about the differences in contributors' Ratings.
    • The Rating columns Circled in Orange are the average deviation of all contributor ratings.
      • Think of this as the average difference of contributor ratings from the average. So, there's an average in the middle somewhere (shown in the Averages table), and these numbers show how widely the ratings are spread around it. It's a measure of how much we agree on the Ratings.
      • If a Rating in AveDev Ratings is zero, then we totally agree.
      • If they're greater than 1, then there may be a good basis for a focussed discussion.
        • Talk to your fellow working group members about their understanding of how the proposal and criterion relate. It's likely that differences in your understanding of how each proposed policy would work are causing this difference in rating.
        • If you all agree on how the proposals and criteria relate, then probably leave the scores alone. If not, then document the new collective understanding (in a comment on the Proposal cell) and everyone should go back and revise their ratings in light of the new understanding.
    • The "AveDev Ratings" totals row Circled in Purple tells us the total effect on the result of the differences in Ratings.
      • This is achieved by keeping all of the CriteriaWeights at their averages and combining them with the average-deviation Ratings.
      • The scale of these totals, compared to the totals in the Averages table above, gives you some idea of whether it's even worth discussing the ratings any further. (A sketch follows.)
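And the mirror-image sketch for the Ratings side, again with invented numbers: the orange-circled columns are the AVEDEV of each rating across contributors, and the purple-circled totals row combines the average Weights with those deviations.

    from statistics import mean

    def avedev(values):
        # Mean absolute difference from the mean (the AVEDEV spreadsheet function).
        values = list(values)
        m = mean(values)
        return mean(abs(v - m) for v in values)

    # Three contributors rate one proposal against one criterion:
    print(round(avedev([9, 5, 7]), 2))  # 1.33 - above 1, worth a focussed discussion

    # Totals row: average Weights combined with the AveDev of the Ratings.
    avg_weights    = {"Promotion of Culture": 5.0,  "Content Creation Volume": 5.0}
    avedev_ratings = {"Promotion of Culture": 1.33, "Content Creation Volume": 0.5}
    total = sum(avg_weights[c] * avedev_ratings[c] for c in avg_weights)
    print(round(total, 2))  # small relative to the Averages totals = broad agreement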