Great, robots.txt parsing is now working correctly!
Things done for my search engine project today:
- Improved duplicate indexing prevention, using a bloom filter to track a much larger history of previously seen documents in less memory (rough sketch after this list)
- Added robots.txt support
- Improved performance measurement
- Added detection of corporate websites
- Added language detection and stemming of varying quality for some 25 languages (second sketch after this list) - please let me know if you know of any good language-specific stemmers!
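For the curious, the bloom filter trick works like this: a fixed-size bit array plus a handful of hash functions answers "have I seen this URL before?" in constant memory, with a tunable false positive rate. A false positive just means skipping a document you'd otherwise re-index, which is harmless here. A minimal Python sketch; the sizing and hash scheme are made up for illustration, not what my code actually does:

```python
# Minimal bloom filter sketch. The sizing (2 MB, 4 hashes) is illustrative.
import hashlib


class BloomFilter:
    def __init__(self, num_bits: int = 1 << 24, num_hashes: int = 4):
        self.num_bits = num_bits
        self.num_hashes = num_hashes  # up to 4 with a single SHA-256 digest
        self.bits = bytearray(num_bits // 8)

    def _positions(self, item: str):
        # Slice one SHA-256 digest (32 bytes) into num_hashes 8-byte values;
        # each becomes a bit position in the array.
        digest = hashlib.sha256(item.encode("utf-8")).digest()
        for i in range(self.num_hashes):
            yield int.from_bytes(digest[i * 8:(i + 1) * 8], "big") % self.num_bits

    def add(self, item: str) -> None:
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, item: str) -> bool:
        # False positives are possible; false negatives are not.
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(item))


seen = BloomFilter()

def should_index(url: str) -> bool:
    """Skip URLs the filter has (almost certainly) seen before."""
    if seen.might_contain(url):
        return False
    seen.add(url)
    return True
```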
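The language pipeline is basically detect-then-stem. Here's a sketch of the general shape using off-the-shelf parts (langdetect for identification, NLTK's Snowball stemmers where one exists); these libraries are stand-ins rather than what my code necessarily uses, and the language table is truncated for brevity:

```python
# Detect-then-stem sketch. langdetect and NLTK are stand-ins for whatever
# the real pipeline uses; the language table is truncated for brevity.
from langdetect import detect                   # pip install langdetect
from nltk.stem.snowball import SnowballStemmer  # pip install nltk

# ISO 639-1 code -> Snowball stemmer name, for a few of the ~25 languages.
SNOWBALL_NAMES = {
    "en": "english", "sv": "swedish", "de": "german",
    "fr": "french", "es": "spanish", "it": "italian",
}

def stem_tokens(text: str) -> list[str]:
    try:
        lang = detect(text)  # e.g. "sv"; raises on very short/odd input
    except Exception:
        lang = "en"
    tokens = text.lower().split()
    name = SNOWBALL_NAMES.get(lang)
    if name is None:
        return tokens  # no stemmer for this language: index raw tokens
    stemmer = SnowballStemmer(name)
    return [stemmer.stem(tok) for tok in tokens]
```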
At first glance, I seem to be getting a roughly 4% robots.txt rejection rate on a pile of personal websites, which is honestly lower than I had expected.
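Measuring that rejection rate is straightforward if you lean on a ready-made parser. Here's roughly how you'd compute it with Python's stdlib robotparser; the user agent string is a placeholder, and a real crawler would fetch and cache robots.txt once per host rather than once per URL:

```python
# Rejection-rate sketch using the stdlib parser. USER_AGENT is a placeholder,
# and per-URL fetching is naive: a real crawler caches robots.txt per host.
from urllib.parse import urlparse
from urllib.robotparser import RobotFileParser

USER_AGENT = "example-crawler"  # placeholder, not my crawler's real UA

def is_allowed(url: str) -> bool:
    root = urlparse(url)
    rp = RobotFileParser(f"{root.scheme}://{root.netloc}/robots.txt")
    rp.read()  # fetches and parses; a missing robots.txt allows everything
    return rp.can_fetch(USER_AGENT, url)

def rejection_rate(urls: list[str]) -> float:
    return sum(not is_allowed(u) for u in urls) / len(urls)
```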