Crawl web operator does not return any results
mertcatar
Hi, I have a problem with the Crawl Web operator: it doesn't return any results. I tried the latest RapidMiner and then set up 7.1, but the result didn't change; the result page is still empty. I even tried URLs that weren't https. Could you help me please, where am I going wrong here?
Tagged: AI Studio, Web Mining, Extensions, Text Mining + NLP, Results View, Web Apps
Accepted answers
kayman
You didn't apply any crawling rules, so your operator is effectively doing nothing even if it reads the content. If you don't state which link patterns to follow and which of those pages to store, the system just crawls blindly.
Try the Get Page operator first and see if you get any result then. It loads the actual URL and returns the data, so you can validate the connection before anything else.
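To make the "validate the connection first" step concrete, here is a minimal Python sketch of what Get Page does: fetch a single URL and return its text. This is an illustrative analogy, not RapidMiner code; the function name `get_page` is my own.

```python
from urllib.request import urlopen

def get_page(url):
    """Fetch one URL and return its body as text,
    roughly what the Get Page operator does for a single page."""
    with urlopen(url) as resp:
        return resp.read().decode("utf-8", errors="replace")

# If this returns content for your URL, the connection itself is fine
# and the empty crawl result is down to missing crawling rules.
```

If this simple fetch already comes back empty or errors out, the problem is the connection or the URL, not the crawler configuration.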
Next, define which pages to crawl and store (patterns) in the Crawl Web operator, or use Get Pages and provide a list of URLs.
Typically that's a bit more reliable, since web structures can be pretty complex for 'blind' crawling, and quite a few sites will simply kick you off their server if you do it too obviously.
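The two rule types described above can be sketched in plain Python: a "follow" pattern deciding which discovered links to crawl next, and a "store" pattern deciding which of those pages to keep. The `example.com` patterns and the `plan_crawl` helper are assumptions for illustration only; with neither rule set, nothing is stored, which matches the empty result in the question.

```python
import re

# Hypothetical rules, analogous to the Crawl Web operator's
# "follow" and "store" URL patterns:
FOLLOW_RULE = re.compile(r"https?://example\.com/.*")          # links to crawl
STORE_RULE = re.compile(r"https?://example\.com/articles/.*")  # pages to keep

def plan_crawl(links):
    """Split discovered links into ones to follow and ones to store."""
    to_follow = [u for u in links if FOLLOW_RULE.match(u)]
    to_store = [u for u in to_follow if STORE_RULE.match(u)]
    return to_follow, to_store

links = [
    "https://example.com/articles/1",
    "https://example.com/about",
    "https://other.org/page",
]
follow, store = plan_crawl(links)
print(follow)  # both example.com links are followed
print(store)   # only the /articles/ page is stored
```

The point of the split is exactly the answer's diagnosis: a crawler with no store rule can visit every page and still return an empty result set.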