
League One xG table


Comments

  • MrOneLung said:
    Sorry, another question.

    In the xG table, if the xG in a match is 1.01 versus 1.99, does that go down as a point each for a 1-1 draw?

    Or does the expected points model give the 1.99 side a ‘win’?
    In full awareness that I’m opening myself up to ridicule from certain quarters: if you wanted to be properly nerdy about it and distribute points based on xG, here’s what I’d do.

    Take Stevenage v Charlton yesterday.

    Stevenage had 0.50 xG on 11 shots, averaging 0.045 per shot. Charlton had 0.16 xG on 7 shots, averaging 0.023 per shot.

    You can use a Binomial distribution to figure out some percentages based on the chances that were created during the 90 mins…

    Stevenage: n(shots)=11, p(probability of goal)=0.045
    ~60% scores 0,
    ~31% scores 1,
    ~7% scores 2,
    ~1% scores 3
    and smaller for more…

    Charlton: n=7, p=0.023
    ~84% scores 0,
    ~14% scores 1, 
    ~1% scores 2
    and smaller for more…

    Then multiply the two teams’ percentages together to get each scoreline’s probability…
    ~51% of 0-0,
    ~26% of 1-0,
    ~8% of 0-1,
    ~4% of 1-1,
    ~6% of 2-0,
    ~1% of 2-1
    and smaller for other results…

    Then add up the scoreline probabilities for each match outcome…
    ~33% Stevenage
    ~55% Draw
    ~8% Charlton
    (yes, about 4% is missing, because of rounding in all the rough calculations above)

    Then allocate the points based on those outcomes…
    Stevenage (3pts * 0.33 + 1pt * 0.55) = 1.54
    Charlton (3pts * 0.08 + 1pt * 0.55) = 0.79
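
    For anyone who wants to reproduce this, here is a minimal Python sketch of the binomial method described above. The shot counts and xG totals are the ones quoted in the post; the function names and the cap on goals are illustrative. It sums the probabilities exactly rather than rounding at each step, so the final figures come out slightly higher (roughly 1.61 and 0.83):

    ```python
    from math import comb

    def goal_dist(n_shots, p, max_goals=10):
        """Binomial P(goals = k) for k = 0..max_goals, assuming every
        shot has the same scoring probability p."""
        return [comb(n_shots, k) * p**k * (1 - p)**(n_shots - k)
                for k in range(min(n_shots, max_goals) + 1)]

    def expected_points(dist_a, dist_b):
        """Turn two goal distributions into expected points per team."""
        p_a_win = sum(pa * pb for a, pa in enumerate(dist_a)
                              for b, pb in enumerate(dist_b) if a > b)
        p_draw = sum(pa * pb for a, pa in enumerate(dist_a)
                             for b, pb in enumerate(dist_b) if a == b)
        p_b_win = 1 - p_a_win - p_draw
        return 3 * p_a_win + p_draw, 3 * p_b_win + p_draw

    # Stevenage: 0.50 xG from 11 shots; Charlton: 0.16 xG from 7 shots
    stevenage = goal_dist(11, 0.50 / 11)
    charlton = goal_dist(7, 0.16 / 7)
    xpts = expected_points(stevenage, charlton)
    print(f"Stevenage {xpts[0]:.2f} xPts, Charlton {xpts[1]:.2f} xPts")
    ```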
  • The binomial distribution assumes that p is constant, whereas xG has a different probability for each attempt.
  • Yes, that is correct. I made that assumption to make the calculations easier.
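
    For completeness: dropping that assumption is straightforward. Keeping each shot’s own xG gives a Poisson binomial distribution, which can be built by convolving the shots one at a time. A sketch; the per-shot values below are invented for illustration, since the posts don’t give shot-level data (only their totals match the 0.50 and 0.16 quoted above):

    ```python
    def goal_dist_exact(shot_xgs):
        """Poisson-binomial P(goals = k): every shot keeps its own
        scoring probability. Built by convolving one shot at a time."""
        dist = [1.0]  # before any shots, P(0 goals) = 1
        for p in shot_xgs:
            new = [0.0] * (len(dist) + 1)
            for k, prob in enumerate(dist):
                new[k] += prob * (1 - p)   # this shot misses
                new[k + 1] += prob * p     # this shot scores
            dist = new
        return dist

    # Hypothetical shot-level xG, summing to the match totals above
    stevenage_shots = [0.25, 0.08, 0.05, 0.04, 0.02, 0.02,
                       0.01, 0.01, 0.01, 0.005, 0.005]  # 11 shots, 0.50 xG
    charlton_shots = [0.06, 0.04, 0.02, 0.02,
                      0.01, 0.005, 0.005]               # 7 shots, 0.16 xG
    print(goal_dist_exact(stevenage_shots))  # [P(0), P(1), P(2), ...]
    ```

    The expected_points function from the previous sketch works unchanged on these distributions, since it only needs the two goal distributions, not how they were produced.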
  • Stig said:
    Both of those tables are mis-named. They don't show whether teams have overperformed or underperformed at all. What they show is how good xG has been at predicting match outcomes. For some teams (Charlton is a good example), Opta's 'game outcome simulations' seem to be a pretty decent indicator of match-day outcomes, though these tables only show the overall pattern; they could be spot-on or wildly out on a match-by-match basis, and we can't tell which from this. For other teams (Exeter and Cambridge are prime examples), the xG model is underperforming as an indicator.
    In what way is it underperforming as an indicator, if actual and expected position are quite different?

    It's not supposed to necessarily be the same, it's not trying to mirror the league positions. It's giving us an indication of how good a team is at creating chances and preventing chances.

    Don't forget, what's been presented here is not xG per se, but the outcome of 'simulations based on individual shot xG'. The title of those charts is "Which teams have under/over-performed", and I'm saying that this is wrong because we cannot say from the data presented that any of those teams have over- or underperformed in any meaningful sense. They have all performed in a certain way and have taken a certain number of points; sometimes through skill, sometimes through luck, mostly through a mix of the two.

    The idea that any of them have over- or underperformed because their results didn't match those of 'simulations' is completely wrong. It's like saying that the weather under-performed or over-performed because it didn't match the weather forecast. Of course, nobody would say that, because the whole notion is ridiculous; the weather, just like football teams' results, is real-world stuff. The weather forecast, just like 'simulations based on individual shot xG', is not real-world stuff; it is a prediction, which is sometimes good and sometimes not. When predictions don't match reality, that doesn't mean that reality has over- or underperformed; it means that the system used to make the prediction is not accurate enough.

    Please don't get me wrong, I'm not saying that the indicators presented here are consistently under-performing, but where there is a disparity between the red and the purple dots, that is what it tells us and nothing else.
  • Stig said:
    Both of those tables are mis-named. […] When predictions don't match reality, that doesn't mean that reality has over- or underperformed; it means that the system used to make the prediction is not accurate enough.
    Nothing in that chart is a prediction, though. It’s an assessment of historic performance based on indicators other than results, simulated to take the form of a points table. In this case it’s based on the volume and quality of shots taken.

    xG is by definition not supposed to mirror historic match results. If it did, it would be redundant as both a measure of performance and a forecasting indicator; we would just use results and go from there. So to say that xG is underperforming as an indicator because it isn’t matching up to results does seem to muddy the point…
  • hezzla said:
    Nothing in that chart is a prediction, though. […]
    But this isn't xG. Have a look at the small print: they are using xG to run simulations that estimate the points they expect teams to get. If that's not making a prediction, I don't know what is.
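
    The small print itself isn't reproduced in this thread, so the exact method is unknown, but one plausible reading of 'simulations based on individual shot xG' is a Monte Carlo version of the calculation earlier in the thread: treat each shot as an independent chance that scores with probability equal to its xG, replay the match many times, and award points by the simulated scoreline. A sketch under that assumption, not Opta's confirmed method:

    ```python
    import random

    def simulated_points(shots_a, shots_b, n_sims=10_000):
        """Monte Carlo expected points: replay the match n_sims times,
        scoring each shot independently with probability = its xG."""
        pts_a = pts_b = 0
        for _ in range(n_sims):
            goals_a = sum(random.random() < p for p in shots_a)
            goals_b = sum(random.random() < p for p in shots_b)
            if goals_a > goals_b:
                pts_a += 3
            elif goals_b > goals_a:
                pts_b += 3
            else:
                pts_a += 1
                pts_b += 1
        return pts_a / n_sims, pts_b / n_sims
    ```

    Summed over a season, per-match figures like these are what an 'expected points' column contains; whether that amounts to a prediction or a description of chance quality is exactly what is being argued in the posts that follow.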
  • Stig said:
    But this isn't xG. […]
    It's just xG put into points table form so it's easier to digest at a glance. It's still not a prediction.
  • Perhaps you could explain what part of xG is a 'simulation' then? Their words, not mine. 
  • Stig said:
    Perhaps you could explain what part of xG is a 'simulation' then? Their words, not mine. 
    Sure. They have simulated results based on which team had the highest xG in the game.
  • Stig said:
    Perhaps you could explain what part of xG is a 'simulation' then? Their words, not mine. 
    Sure. They have simulated results based on which team had the highest xG in the game.
    And when those simulated results don't match the reality, it is the simulation that is wrong, not the reality. And that is what the disparity between the red and purple dots is measuring.
  • Stig said:
    And when those simulated results don't match the reality, it is the simulation that is wrong, not the reality. And that is what the disparity between the red and purple dots is measuring.
    Neither of them is “wrong”. They’re just showing different things; in this case, where underlying performances have differed from results.

    There are lots of possible explanations for those differences, but the fact that they differ does not by itself show the simulation is flawed. It's not designed to match up.
  • Stig said:
    And when those simulated results don't match the reality, it is the simulation that is wrong, not the reality. And that is what the disparity between the red and purple dots is measuring.
    xG should not necessarily match results. If it did, then it would be pointless since we already have a results measure.

    Saying xG is wrong because it doesn't match results is like saying the odds of winning the lottery being 1 in 14 million must be wrong because my mate Bob won it last year.

    xG is a measure of probability, not a measure of what actually happens.
  • I'm not saying xG is wrong. I'm saying that simulations based on xG, which is what Opta have done here, will not always be accurate. When Opta's xG-based simulations do not match what happens in reality, it is wrong to describe that as teams under- or over-performing.