Handicap Review Guidance

rulefan

Assessing a player's performance relative to expected scoring pattern.
Does anyone know what the criteria are for determining such scores?
 
As a matter of interest, I was flagged on our Annual Review report from WHS. In the 2022 full year, I had 137 entries on my record.
 
This was my explanation of ‘how it works’ to a member whose handicap we had cut by a shot after his name was flagged by the WHS report.
My interpretation is that the WHS holds an algorithm covering an average range of scores over a set period of time, and your pattern does not follow that specific graph. However, that is my personal interpretation; I may be talking nonsense and have totally misunderstood how the report works. But the guidance is clear that the Committee has to consider the handicaps of those members who come up in the report.
 
I don't think that Near Hull is too far away.

What no one knows, however, is the statistical definition of the "expected scoring range" that a player's scores are being compared to.

I would suggest that the expected scoring range will vary by handicap index, in that lower indexes will have a narrower expected range compared to, say, 20+ handicaps, who will be far more variable. No doubt they've paid some statistician to come up with something impenetrable to "joe bloggs committee man" as a side job whilst they were determining the PCC algorithm.
 
Having done the annual review back in December it really did leave me wondering who worked out the algorithm for players appearing on the report.

Is it still the same as under the UHS, when the number of scores on a record was probably about 30 at the highest?

It does not appear to account for those players putting in 100+ scores. If a player is putting in that sort of number, there are bound to be a very high number outside of the expected range.

We had 44 players on the report and the vast majority were ignored, simply because of the very high number of scores on their record. In essence, we looked at the number of scores within a range of their H.I. rather than the number outside of the range.
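For illustration only, that kind of check could be as simple as the sketch below. The ±3 band, the differentials and the index are all invented numbers, not anything taken from the WHS report.

```python
def share_within_band(differentials, handicap_index, band=3.0):
    """Proportion of a player's score differentials within +/- band
    of their handicap index (the band width is just an example)."""
    inside = sum(abs(d - handicap_index) <= band for d in differentials)
    return inside / len(differentials)

# Made-up differentials for a player with a big record and a 15.1 index
scores = [14.2, 18.9, 11.7, 25.4, 16.0, 13.3, 17.8, 30.2, 15.5, 12.9]
print(f"{share_within_band(scores, 15.1):.0%} of scores within 3 of the index")
```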
 
Agreed. We only had 6 come up, two for a stroke cut, and I'd no problem with them, and neither did the two players.

Then two rapidly improving juniors were suggested for an increase; that got thrown right out, as it made absolutely no sense.

Next was a senior who put in his 3 cards for his initial, first-ever handicap; the annual review suggests giving him two more shots. How does that work when he's literally just been given that index off the minimum number of holes played?

Finally, a decent player who hit his lowest index at the back end of the season, with all 8 of his counting scores in his last 12 rounds; again, give him two back. We offered it to the guy, who refused, as he was delighted to hit a new personal low. Point being, if WHS is meant to reflect current form, how are you penalising a guy who has been red hot in his last 12 rounds?
 
We have a 'health' appeal from a player who was suggested for an increase from 7.3 to 8.3. Another committee member has suggested we consider that 7 out of 8 of his best scores are GP and are all in his last 12 scores. Thoughts?
 

Requests for handicap reviews based upon health should only take into consideration scores made since the illness/injury occurred, so you need to date this and look at the subsequent scores.
 
I think one of the issues here is that the WHS system is trying to make a science out of an art!

The mysterious algorithm points you in one direction, which is fine (I am happy to believe it works from a statistical perspective). As previously discussed, this may or may not generate the need for a change. However, the local knowledge of a balanced handicap committee more often than not provides the insight to make any relevant changes for members (illness, fast-improving juniors, etc.).

Suggesting that you only review those on the review report may be an attempt to limit workload or interference with WHS, but it doesn't reflect the fundamental role, which is to try to ensure handicap fairness across the membership.
 
Agree with all of your points, mijkejohnchapman. Provided they put cards in, WHS will deal fairly well with the fast-improving junior, as it is based on his/her best scores. This type of player may well show up in the WHS review report, as they are likely to have a wide range of scores.

The impact of illness/injury on scores will come through, but the handicap may move too slowly, particularly if it's a higher-handicap (pre-injury) player; this is when the player needs to raise the possibility of a review with the committee.
 
From an outsider's perspective, the statistical analysis and report are only data for the Committee to consider when making their decisions; they're not the be-all and end-all.
 
I have a degree in statistics (and I am on my club's handicap committee), so whilst I haven't seen the methodology being used, from their description I'm fairly confident I know the general method of what they are doing. It's a universally accepted statistical approach, used for example to analyse results data from scientific/medical experiments.

It's not 'some algorithm some guy's come up with'; it's based on the actual data in the WHS system. The database holds thousands of score differentials for each handicap, so they can calculate the range that covers 95% of the score differentials for each handicap, i.e. find the points at which only 2.5% of the scores are above the higher end of the range and 2.5% are below the lower end. So for a 6 handicap, the 95% range of score differentials might be, say, 1.4 at the lower end and 10.7 at the top.

The players who get flagged up are those who have more than 2.5% of their scores outside one or other end of the range.
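If that description is right, a toy version of the check might look like the sketch below. Everything here is an illustrative guess at the general method described above, not the actual England Golf implementation: the function names, the 2.5%/97.5% cut points, the tolerance and the numbers are all made up.

```python
import numpy as np

def expected_range(pool_differentials, low_pct=2.5, high_pct=97.5):
    """Percentile-based 'expected scoring range' from a pool of score
    differentials recorded by players at one handicap index."""
    return np.percentile(pool_differentials, [low_pct, high_pct])

def is_flagged(player_differentials, lo, hi, tolerance=0.025):
    """Flag a player whose own differentials fall outside the expected
    range more often than the tolerated proportion."""
    outside = np.mean((player_differentials < lo) | (player_differentials > hi))
    return outside > tolerance

# Made-up pool of differentials for 6-index players
rng = np.random.default_rng(1)
pool = rng.normal(6.0, 2.3, size=5000)
lo, hi = expected_range(pool)

player = np.array([1.9, 4.2, 12.8, 6.0, 15.1, 7.7, 3.3, 9.4])
print(f"expected range {lo:.1f} to {hi:.1f}, flagged: {is_flagged(player, lo, hi)}")
```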

Obviously "some algorithm some guy has come up with" is an off the cuff comment and of course the algorithm will be based on real world historical data.

So...a couple of questions...

1) What description is it that you have seen that leads you to your conclusions regarding the general methodology used?

2) Is 95% a published figure or one you have used for illustrative purposes?

3) Using your example of a 6 handicap with an expected range from 1.4 to 10.7 (i.e. +/- 4.6) effectively assumes that expected scores will be normally distributed, whereas in reality there will be a significant degree of skewness (or is it kurtosis? I will happily concede to your qualification in this regard :)) in a player's score distribution, simply because the handicap index itself is based on a biased sample of the last twenty scores (i.e. the index is biased towards the better scores) and players are more likely on any given day to shoot a score worse than their handicap index than better. Would you say a 6 handicap's expected scoring range might more realistically be from 3 to 13? Maybe a 2 handicap's from 0 to 7? And a 20 handicap's from 13 to 40?
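For what it's worth, here is a quick toy simulation of the skew described in point 3. The score model is entirely invented, and it assumes the index is the average of the best 8 of the last 20 differentials:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented score model for a mid-handicap player: a floor of good rounds
# plus a long tail of bad ones (gamma noise gives the upward skew).
differentials = 4.0 + rng.gamma(shape=2.0, scale=2.5, size=20)

# WHS-style index: average of the best 8 of the last 20 differentials.
index = np.sort(differentials)[:8].mean()

print(f"index                    ~ {index:.1f}")
print(f"median differential      ~ {np.median(differentials):.1f}")
print(f"rounds worse than index: {np.mean(differentials > index):.0%}")
```

Because the index sits below the median differential, a band placed symmetrically around the index would sit lower than where most of the player's scores actually fall, which is the point being made above.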
 
nickjdavis

1) The description is just the one EG gave in their guidance, plus hints from someone with a bit more knowledge on another forum.
2) The 95% was used as an illustration; I should have made that clear. England Golf use 2.5% at the lower end and a lower (unspecified) percentage for the upper end.
3) I agree that in practice scores may well be skewed upwards, and the ranges probably get wider with increasing handicap.
 

cheers (y)

2.5% at the low end and an even lower proportion at the higher end would, it seems to me, result in most scores falling within the expected scoring range!!
 
I'm guessing the algorithm is very much a work in progress, given WHS is in its very early stages? It (hopefully) does a pretty decent job in many general circumstances, but is still rough around the edges, highlighting some players that would not be expected while maybe missing out on some others. For example, if a player had very good competition scores, but the majority of his rounds were GP, and poor scores, would the algorithm pick this player out? Or does it just group all scores together when looking at them?

This is why we need Committees. If the algorithm was perfect, you could probably just get rid of the Annual Review and get the algorithm to make all adjustments automatically.
 
The algorithm throws up a surprisingly small number of players. Jim8flog’s 44 players is very much an outlier. Typically, having checked quite a few reports in my county, the proportion of players flagged is about 1-2% of the membership. This may well suggest that WHS is doing its job, or that there are a lot of players who need examining using all the tools at the Committee’s disposal but who cannot be highlighted by the statistics available to the algorithm on the portal.
I would be interested to know how iGolfers’ handicaps are reviewed and whether the system just automatically applies the report’s recommendation.
 