You could make it harder to pass the URL around by making it dynamic, i.e. tied to the visitor's session.
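In Nginx one way to do that is the secure_link module, which embeds an expiring hash in the URL so a copied link stops working. A minimal sketch, assuming a /downloads/ location and a placeholder secret (neither is from the original post):

  # Expects URLs like /downloads/file.txt?md5=HASH&expires=UNIXTIME
  location /downloads/ {
      secure_link     $arg_md5,$arg_expires;
      secure_link_md5 "$secure_link_expires$uri$remote_addr my_secret";

      if ($secure_link = "")  { return 403; }   # hash missing or wrong
      if ($secure_link = "0") { return 410; }   # link has expired
  }

The application that writes the download link computes the same MD5; including $remote_addr also ties the link to the visitor's IP address.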
You can do a search on "uncrawlable" and then do exactly the opposite of what is suggested: most of that advice assumes you want to be crawled, so for your purposes it is backwards.
One thing to watch out for is Google dorking. Even if Google doesn't crawl the file itself, the HTML/JavaScript/etc. leave clues to file locations.
When I am on a server I will copy in what is needed to block wget, curl, Nutch, Screaming Frog, etc. I have a number of Nginx maps I use to block and flag troublemakers, along the lines of the sketch below.
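A simplified sketch (the agent list and response code are illustrative, not my full ruleset):

  # In the http{} context: flag known scraper User-Agents...
  map $http_user_agent $bad_bot {
      default               0;
      ~*(wget|curl|nutch)   1;
      ~*screamingfrog       1;
  }

  # ...then refuse them inside server{} or location{}:
  if ($bad_bot) { return 403; }

As Ralph notes below, this only stops clients that announce themselves honestly; it is a nuisance filter, not access control.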
Original Message
From: abbot@monksofcool.net
Sent: March 11, 2020 3:57 PM
To: nginx@nginx.org
Reply-to: nginx@nginx.org
Subject: Re: Prevent direct access to files but allow download from site
* MAXMAXarena:
> I want to be able to download a file from the site's html tag [...]
> But do not allow direct access and download, using the browser or
> other tools such as curl or wget.
Public access and restricted access are mutually exclusive. It also
makes almost no difference which utility is used to access a URL pointing
to the text file you gave as an example, because they will all send an
HTTP GET request, which is what the web server expects. Attempting to
limit access based on the User-Agent header or similar client
identification is easily circumvented.
-Ralph
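To see how easily that is circumvented: the User-Agent header is set entirely by the client, e.g. (hypothetical URL):

  # curl announcing itself as a desktop browser
  curl -A "Mozilla/5.0 (Windows NT 10.0; Win64; x64)" https://example.com/files/data.txt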
_______________________________________________
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx