Question for computer geeks

Discussion in 'Odds & Ends' started by McLovin, Oct 30, 2013.

  1. McLovin

    McLovin Hall of Fame

    Joined:
    Jun 2, 2007
    Messages:
    3,184
    This came up at work today, and while I had the answer to the problem, I did not have the answer to the follow-up question 'why that number?'.

    Anyone who has ever set up a server, whether it be a complex network, an HTTP server (e.g., Apache, Tomcat, etc.), or a simple Unix file server, knows that ports 0-1023 are 'restricted', meaning you must have root/administrator access to create a listener on one of those ports.
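    You can probe the restriction yourself. A minimal Python sketch (whether the low-port bind succeeds depends on your privileges; as an ordinary user on a stock Unix/Linux box it fails with EACCES):

```python
import socket

def can_bind(port):
    """Try to bind a TCP socket to `port` on loopback; report success."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.bind(("127.0.0.1", port))
        return True
    except PermissionError:
        # On classic Unix, binding below 1024 without root raises EACCES.
        return False
    finally:
        s.close()

print(can_bind(0))    # port 0 asks the OS for an ephemeral port: always allowed
print(can_bind(80))   # False for an ordinary (non-root) user
```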

    My question is...why 1024? Why not 1000 or 2112? 1024 is 2^10, so it's not as if they were trying to save space by limiting it to a byte boundary.

    Note, that I know why there are restricted ports (it's a historical trust/security issue), I'm just wondering what significance 1024 has.

    Anyone?

    (NOTE: I realize this would be better asked on a computer forum, but I'm too lazy to create an account, and I know there are some geeks here because I've read some of your posts :) )
     
    Last edited: Oct 30, 2013
    #1
  2. Chico

    Chico Banned

    Joined:
    Jun 29, 2013
    Messages:
    9,197
    2^10 = 1024. It is a power of 2.

    Computers work with binary numbers (i.e., one bit can only be 0 or 1), so it is natural to use numbers that are powers of 2.
    1024 happens to be the power of two that is closest to 1000.

    That is why 1KB = 1024 B and 1 MB = 1024 KB, ...
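    The bracketing is easy to check (a trivial Python sketch):

```python
# powers of two around 1000: 512 falls short, 1024 is the nearest
print(2**9, 2**10)                                                  # 512 1024
print(min([2**k for k in range(16)], key=lambda p: abs(p - 1000)))  # 1024
```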
     
    #2
  3. McLovin

    McLovin Hall of Fame

    Joined:
    Jun 2, 2007
    Messages:
    3,184
    Yes, I understand 1 KB = 1024 B, but why not 512 or 2048? Maybe there is no answer (it was arbitrary), or maybe it's something like the old DOS 640K limit, which was imposed by the IBM PC's memory layout, and I'm fine w/ that.

    I'm just trying to see if anyone has a solid reference for why they chose '1024'.
     
    #3
  4. Li Ching Yuen

    Li Ching Yuen Legend

    Joined:
    Mar 7, 2010
    Messages:
    6,989
    The physical limit is 65535, since a port number is a 16-bit field. The Unix guys knew they needed a class for special processes, so they carved the range into classes (IANA defines three today: well-known, registered, and dynamic/private). The benefits are obvious.

    The number of widely used services at the time was reasonably approximated by 2^10. Anything less would not have been enough, even though not every port in that range is assigned today.
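    Those three IANA ranges (well-known 0-1023, registered 1024-49151, dynamic/private 49152-65535) can be sketched as a small classifier (the function name is my own):

```python
def port_class(port: int) -> str:
    """Classify a TCP/UDP port into its IANA range."""
    if not 0 <= port <= 65535:
        raise ValueError("port numbers are 16-bit: 0-65535")
    if port <= 1023:
        return "well-known"      # requires root to bind on classic Unix
    if port <= 49151:
        return "registered"
    return "dynamic/private"

print(port_class(80))     # well-known
print(port_class(8080))   # registered
print(port_class(60000))  # dynamic/private
```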
     
    #4
  5. fantom

    fantom Hall of Fame

    Joined:
    Feb 24, 2004
    Messages:
    1,542
    I don't have the specific answer you're seeking, but it most likely wasn't just arbitrary. Engineers make these decisions all the time based on current and future needs of their systems. At the time, 1024 probably left adequate overhead for future expansion.

    I've been designing digital computing systems for about 13 years now, FWIW.
     
    #5
  6. MauricioDias

    MauricioDias Rookie

    Joined:
    Oct 28, 2013
    Messages:
    303
    Location:
    São Carlos - SP
    Suppose you're exchanging data with a computer on a port <1024, and you know that computer is running some variant of unix. Then you know that the service running on that port is approved by the system administrator: it's running as root, or at least had to be started as root.

    On the wide, wild world of the Internet, this doesn't matter. Most servers are administered by the same people as the services running on them; you wouldn't trust the roots more than the other users.

    With multiuser machines, especially on a local network, this can matter. For example, in the days before civilian cryptography, a popular method of running shell commands on another machine was rsh (remote shell); you could use password authentication, or you could authenticate just by proving you were user X on machine A (with machine B knowing that X@A could log in as X@B with no password). How to prove that? The rsh client is setuid root and uses a port number <1024, so the server knows that the client it's talking to is trustworthy and won't lie as to which user on A is invoking it.

    Similarly, NFS was designed to be transparent with respect to users and permissions, so a common configuration was that on a local network every machine used the same user database, and user N at A mounting filesystems from server B would get the permissions of user N at B. Again, the fact that the NFS client is coming from a port number <1024 proves that root at A has vetted the NFS client, which is supposed to make sure that if it transmits a request purporting to be from user N then that request really is from user N.
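    At bottom, the rsh-style check is just a comparison on the peer's source port. A minimal sketch on loopback (an unprivileged client gets an ephemeral source port well above 1023, so the server refuses to trust it):

```python
import socket
import threading

RESERVED_MAX = 1024  # ports below this require root on classic Unix

def serve_once(srv, verdicts):
    conn, (peer_host, peer_port) = srv.accept()
    # rsh-style trust: believe the client's identity claim only if the
    # connection originates from a reserved (root-only) source port
    verdicts.append(peer_port < RESERVED_MAX)
    conn.close()

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))          # any free port for the server side
srv.listen(1)

verdicts = []
t = threading.Thread(target=serve_once, args=(srv, verdicts))
t.start()

cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(srv.getsockname())      # OS picks an ephemeral source port
t.join()
cli.close()
srv.close()

print(verdicts[0])   # False: an ordinary client cannot claim a reserved port
```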

    Unauthorized users not being able to run servers on low ports is another benefit, but not the main one. Back in the days, spoofing was quite the novelty and users running spoof servers would be quickly quashed by vigilant administrators anyway.

    The limit has held up because it covered a sufficient number of services, and it still does today.
     
    #6
  7. bad_call

    bad_call Legend

    Joined:
    Aug 23, 2006
    Messages:
    5,437
    of same opinion... :)
     
    #7
  8. McLovin

    McLovin Hall of Fame

    Joined:
    Jun 2, 2007
    Messages:
    3,184
    *sigh* That was very well written, and I thank you for that, except...
     
    #8
  9. McLovin

    McLovin Hall of Fame

    Joined:
    Jun 2, 2007
    Messages:
    3,184
    Yeah, I'm starting to think this was just a number they picked that sounded good at the time, and as luck would have it, happened to be a good estimate.
     
    #9

Share This Page