An Archive.org Equivalent without Robots.txt rules?

Discussion in 'Internet and Technology' started by cheat_master30, Sep 10, 2016.

  1. cheat_master30 (Moderator)
    Is there one?

    Because at the moment, it seems like pages are made inaccessible at random: some idiotic domain squatter buys an expired domain and puts up a restrictive robots.txt file, which the archive apparently likes to honour retroactively, hiding every existing snapshot of the old site.

    So is there an equivalent archive that treats successive domain owners as separate people? One that only removes pages if the original site owner puts up a robots.txt file, rather than any old domain squatter?
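
    For reference, if you want to check whether a specific page is still being served by the Wayback Machine, its public availability endpoint (https://archive.org/wayback/available) returns the closest snapshot, if any. The sketch below assumes that endpoint's documented JSON shape; the example URL at the bottom is just a placeholder.

    ```python
    # Minimal sketch: query the Wayback Machine availability API to see
    # whether any snapshot of a URL is currently being served.
    import json
    import urllib.parse
    import urllib.request
    from typing import Optional


    def wayback_snapshot(url: str) -> Optional[str]:
        """Return the closest archived snapshot URL, or None if none is served."""
        query = urllib.parse.urlencode({"url": url})
        with urllib.request.urlopen(
            f"https://archive.org/wayback/available?{query}"
        ) as resp:
            data = json.load(resp)
        # The API returns {"archived_snapshots": {"closest": {...}}} when a
        # snapshot is available, and an empty "archived_snapshots" otherwise.
        closest = data.get("archived_snapshots", {}).get("closest")
        return closest["url"] if closest and closest.get("available") else None


    if __name__ == "__main__":
        # Hypothetical example: a page that may have been hidden after a
        # squatter added a blanket robots.txt.
        print(wayback_snapshot("http://example.com/some-old-page"))
    ```

    If this prints None for pages you know were archived, that's usually the retroactive robots.txt blocking in action rather than the snapshots actually being gone.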
     