So I'm guessing that if he's using one season to sandbag, the results from both seasons still get factored into a player's dynamic rating. Why would the USTA allow that? It would seem to me that only the season that counts toward qualifying for state/district/sectional/national championships should be used for dynamic rating purposes, and other match results should not, for the very reason that counting them invites thrown matches.
Yes, including other leagues opens the door for this type of sandbagging behavior. To me, though, the inconsistency between sections is also an issue.
Some sections include pretty much only the "spring" 18 & over, 40 & over, and 55 & over leagues for NTRP rating purposes. Others have fall/winter leagues that count, plus tri-level and singles leagues that count. And tournaments count in some sections but not in others.
I think the reason other leagues get included is that for some players, the bulk of their play is in these "other" leagues, so excluding them would mean basing a player's rating on only a handful of matches, perhaps only 20-30% of those they actually played. That would introduce a different set of issues, with players never getting bumped up or down simply because the sample isn't large enough.
Consider a 30-year-old who lives in an area where they can't reasonably play in a nearby area's 18 & over league. Their team is in a subflight of 9 teams, so 8 team matches get played. With a large roster, say 18 people, each player can really only get 3-4 matches, and with scheduling conflicts some may only get 2 or 3. A self-rated player may never become computer rated if this is the only league that counts, and the other players won't see their ratings move much at all unless there are huge wins or losses.
If their fall, winter, and tri-level leagues count, though, they probably get another 10-15 matches in, and their rating can be based on a much more complete set of data.
What I'd like to see is something in the algorithm that looks for sandbagged results and omits them, or that throws out each player's best and worst results at year-end, so a few really good or bad results don't skew a rating one way or the other.
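The year-end trimming idea could look something like this rough sketch. The ratings, function name, and trimming rule here are all hypothetical; the actual NTRP dynamic rating algorithm is not published.

```python
def trimmed_year_end_rating(match_ratings, trim=1):
    """Average a player's per-match dynamic ratings after dropping the
    `trim` best and `trim` worst results, so a few outlier matches
    (thrown, or career-best) don't skew the year-end rating.
    Hypothetical sketch only; the real NTRP algorithm is not public."""
    if len(match_ratings) <= 2 * trim:
        # Too few matches to trim; just average what we have.
        return sum(match_ratings) / len(match_ratings)
    kept = sorted(match_ratings)[trim:len(match_ratings) - trim]
    return sum(kept) / len(kept)

# Example: one suspiciously bad result (2.9) among otherwise ~3.5-level play
# gets dropped, so it can't drag the year-end number down.
print(round(trimmed_year_end_rating([3.5, 3.6, 3.4, 2.9, 3.5, 3.7]), 2))  # → 3.5
```

With only a handful of counted matches (the 2-3 match scenario above), trimming would shrink the sample even further, so a real version would probably only trim once a player has enough results.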