I'm trying to set up my robots.txt so that requests coming in on certain third-level domains are served a "no crawl" robots.txt. I've been following the advice in this post, http://community.webfaction.com/questions/3649/separate-robotstxt-for-different-subdomains-that-point-to-the-same-application, but I can't get it to work on my site. I'm serving Django with WSGI, and I put rewrite directives along the lines of that post into my httpd.conf file.
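Roughly, the idea is this (the hostname, username and paths here are placeholders, not my exact values):

    RewriteEngine On
    # When the request comes in on the extra subdomain, serve a different robots.txt
    # (a file-path substitution like this works in server/virtualhost context)
    RewriteCond %{HTTP_HOST} ^staging\.example\.com$ [NC]
    RewriteRule ^/robots\.txt$ /home/username/webapps/myapp/robots-nocrawl.txt [L]

where robots-nocrawl.txt just contains "User-agent: *" followed by "Disallow: /".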
This results in the server eventually timing out with a 502 error.

asked 23 Sep '13, 14:28 by theUNCHARTED
Those directives should work. I would write them like this:
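Something along these lines, with the subdomain, username and paths swapped for your own (this serves the "no crawl" file statically via an Alias instead of letting the request fall through to the Django app):

    # mod_rewrite and mod_alias must be loaded in this httpd.conf
    Alias /robots-nocrawl.txt /home/username/webapps/myapp/static/robots-nocrawl.txt

    RewriteEngine On
    # Only the staging subdomain gets the disallow-everything robots.txt
    RewriteCond %{HTTP_HOST} ^staging\.example\.com$ [NC]
    # [PT] passes the rewritten URL back through URL mapping so the Alias applies
    RewriteRule ^/robots\.txt$ /robots-nocrawl.txt [PT,L]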
There might be a minor syntax error, and this would be easier to debug in real time; you can open a support ticket if you would like us to do that. Be sure to use curl when testing the domain: it doesn't cache or follow redirects, it just shows you the raw response, which is useful for debugging since most browsers cache.
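For example, with your subdomain in place of the placeholder:

    curl -i http://staging.example.com/robots.txt

The -i flag includes the response headers, so you can confirm the status code and see which robots.txt is actually being returned.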
answered 23 Sep '13, 18:26 by johns