offers the clearest description that I have seen of how the national benchmark ratings are calculated.
According to Bob Greene, USPTA, USTA Certified Verifier and Chairman of the USTA NTRP Computer Sub-Committee:
Annually, each Section of the USTA sends up to twenty teams, at all levels and genders, to USA League National Championship Events. Those teams play against each other in four flights of four or five teams each; the draw is done at random. One event may be Florida, Texas, New England and Hawaii ... the next Eastern, Southern, ******* and Northern California.

Before and during these matches, no fewer than four of the most experienced NTRP Verifiers from different areas of the country research the players' match-result history, multi-year rating history and player profile information. They then observe the players competing against several different teams over a period of three days. All match results are entered into the NTRP Computer during the events. The Verifiers are looking specifically for lopsided match results, disparity of level between doubles partners and player improvement over the course of a season.

The players who emerge from their respective flights to the semifinal and final rounds are given "absolute ratings". That rating is a number that is static for the purpose of comparison against other players. These "Benchmark Ratings" are entered into the NTRP Computer, and the program is run, calculating ratings for all of the players who competed at the event. Although the NTRP Computer has an excellent track record of being correct, the National Verifiers make a few adjustments based on the factors noted above. All of these players are National Benchmarks, and by regulation their ratings are not changeable.

These National Benchmark Ratings are entered into the NTRP Computer and filter down through each respective USTA Section, and all players competing in NTRP audited and regulated venues receive a rating if they played two or more matches. The primary goal and purpose of this methodology is to create and maintain uniformity in ratings on a nationwide basis.
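The actual NTRP computer algorithm is not public, but the top-down propagation described above — a handful of fixed "absolute" benchmark ratings anchoring the estimates for everyone else — can be illustrated with a toy model. The sketch below assumes a simple (hypothetical) rule: each match yields a per-match rating for each player equal to the opponent's current rating plus or minus a credit proportional to the game margin, and a non-benchmark player's rating is the average of their per-match ratings, iterated to convergence. The function name, the margin constant, and the player names are all invented for illustration; none of this is the real NTRP formula.

```python
# Toy sketch of benchmark-anchored rating propagation.
# Hypothetical model: a player's dynamic rating is the average of
# per-match ratings, each computed as the opponent's current rating
# plus MARGIN_FACTOR times the game differential. Benchmark ratings
# are held fixed, so their values "filter down" to everyone else.

MARGIN_FACTOR = 0.02  # hypothetical: rating credit per game of margin

def compute_ratings(matches, benchmarks, iterations=50):
    """matches: list of (player_a, player_b, games_a, games_b).
    benchmarks: dict of player -> fixed "absolute" rating.
    Returns a dict of player -> estimated rating."""
    ratings = dict(benchmarks)
    players = {p for m in matches for p in m[:2]}
    # Seed non-benchmark players at the mean benchmark rating.
    seed = sum(benchmarks.values()) / len(benchmarks)
    for p in players:
        ratings.setdefault(p, seed)
    for _ in range(iterations):
        match_ratings = {p: [] for p in players}
        for a, b, games_a, games_b in matches:
            margin = (games_a - games_b) * MARGIN_FACTOR
            match_ratings[a].append(ratings[b] + margin)
            match_ratings[b].append(ratings[a] - margin)
        for p in players:
            # Benchmark ratings are static; only the rest are updated.
            if p not in benchmarks and match_ratings[p]:
                ratings[p] = sum(match_ratings[p]) / len(match_ratings[p])
    return ratings

# Ann is a benchmark at 3.5; Bob and Cam are rated relative to her.
benchmarks = {"Ann": 3.5}
matches = [("Ann", "Bob", 6, 3), ("Bob", "Cam", 6, 4)]
print(compute_ratings(matches, benchmarks))
```

Note that Cam never played Ann directly; his rating is inferred through Bob, which is the sense in which benchmark ratings propagate outward through the match graph.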
No matter what the picture appears to be from the bottom looking up, it is painted from the top down.
I believe that the full 1.0 bumps are made in an effort to keep the system from getting too distorted by fast-improving players, which makes sense. I started playing tennis a year ago and began the season as a legitimate, strong 3.0. I ended the season as a strong 3.5, and I got bumped up a full 1.0 after 3.0 nationals. I am currently getting my *ss handed to me on a regular basis by 4.0 players. It's not fun, but I understand why they bumped the fast-improving players: they don't want us screwing up the whole computer system and making it less fun for everyone else. All in all, I think that the people organizing the rating system do an admirable job with a difficult task.