This discussion, like so many others in this forum, has become mired in GOAT talk. To return to the original post, the year-end No. 1 ranking is a valuable statistic because it can be used to compare individual achievement across generations. The fact that Pancho Gonzalez was a seven-time No. 1 says much about his place in the game's history. Similarly, Sampras's six straight year-end No. 1 finishes are now recognized as his greatest single achievement.
It seems to me that the original post's point is that this does not make all No. 1 seasons equally impressive, and I think that view is correct. Justine Henin was the WTA's No. 1 for both the 2006 and 2007 seasons. In 2006 she finished just barely ahead of Amelie Mauresmo, who won two majors to Henin's one (although Henin reached the final of all four). In 2007, however, she was truly dominant, and her season drew comparisons to Federer's performance in one of his great years. For his part, Federer was clearly more dominant in 2006 than in 2007, although the latter was still one of the greatest seasons of the Open era (he won three majors, reached the final of the fourth and also won the WTF).
I see no reason why it is impossible in principle to rank seasons like this. The ranking would reflect achievements such as majors won and finals reached, WTF and Masters titles, other titles, overall winning percentage across all matches, and so on. There is certainly room for debate about the appropriate weight to give each of these criteria. However, while we might legitimately disagree about which is Borg's or Lendl's or Nadal's best year, it is much harder to do so about McEnroe's, Federer's or Djokovic's. Nor can one maintain that Sampras's 1998 season or Kuerten's 2000 season was as strong as any of Federer's, Nadal's or Djokovic's No. 1 seasons, unless one wants to argue that all comparisons across eras are impossible. This is not an exercise in pure subjectivity in the way that most strong/weak-era claims tend to be.
If we want a true measure of dominance, weeks at No. 1 (or better, consecutive weeks at No. 1) seems to me to provide more accurate information than year-end No. 1 finishes. The disadvantage is that this statistic is only available from the mid-1970s onwards, and it does not become really reliable for another decade after that. It is not a substitute for year-end No. 1 rankings, but there is no reason why we cannot use both criteria. Year-end No. 1 rankings are one of the best measures of career achievement, but neither this nor any other single metric is perfect or tells the whole story.