The Tool, on 22 July 2011 - 08:06 AM, said:
Not really. No offense to cscartrocks, but as I said previously in the thread: if you are using the SEO addon and you simply add "Disallow: /index.php" to the robots.txt file, you will not incur duplicate URL issues.
Actually, Google itself recommends using canonical URLs rather than blocking duplicates via the robots.txt file:
"What about the robots.txt file?
One item which is missing from this list is disallowing crawling of duplicate content with your robots.txt file. We now recommend not blocking access to duplicate content on your website, whether with a robots.txt file or other methods. Instead, use the rel="canonical" link element, the URL parameter handling tool, or 301 redirects. If access to duplicate content is entirely blocked, search engines effectively have to treat those URLs as separate, unique pages since they cannot know that they're actually just different URLs for the same content. A better solution is to allow them to be crawled, but clearly mark them as duplicate using one of our recommended methods. If you allow us to crawl these URLs, Googlebot will learn rules to identify duplicates just by looking at the URL and should largely avoid unnecessary recrawls in any case. In cases where duplicate content still leads to us crawling too much of your website, you can also adjust the crawl rate setting in Webmaster Tools."