PHP Web Search

Free and open source PHP Web Search scripts. Web Search scripts are aimed at adding internal search functionality to your site.
  1. Comic Engine
    2089 total visits
    Comic Engine can also return the URLs of the comic images from content delivery network servers. Requirements: PHP 5.0 or higher
  2. Google_Query
    1618 total visits
    Parameters for the search query can be defined, similar to the ones from the Google "Advanced Search" page. Google_Query connects to the Google site via HTTP and retrieves the contents of the results page. Google_Query also parses the results to obtain the number of occurrences found for the specified query. Requirements: PHP 5.0 or higher; TCP socket connection support enabled
  3. Grab Emails From URL
    1955 total visits
    Grab Emails From URL validates a given page URL and then retrieves the page contents, extracting any e-mail addresses found in it. A text file is used to store the e-mail addresses. Requirements: PHP 5.0 or higher
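    Extracting addresses from fetched HTML usually comes down to a regular expression scan. A minimal sketch of the idea, not the script's actual code (the function name and pattern are illustrative):

    ```php
    <?php
    // Illustrative sketch: extract e-mail addresses from a page's HTML
    // with a simple regular expression. Real-world validation is stricter.
    function extract_emails(string $html): array
    {
        preg_match_all('/[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}/', $html, $m);
        // De-duplicate while keeping first-seen order.
        return array_values(array_unique($m[0]));
    }

    $html = '<p>Contact <a href="mailto:info@example.com">info@example.com</a> or sales@example.org.</p>';
    print_r(extract_emails($html)); // info@example.com and sales@example.org
    ```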
  4.
    1538 total visits
    The included base classes handle the retrieval and parsing of generic search engine result pages. There are also sub-classes specialized in performing search queries to Google, Lycos and Rambler.
  5. Meta tags fetcher
    1692 total visits
    Meta tags fetcher retrieves a Web page from a given URL and extracts the values of its HTML meta tags. It parses the retrieved page and extracts its title and the meta tag values for the keywords and description, if present. Meta tags fetcher provides a class that implements an abstract interface for accessing Web pages and several ...
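    Meta tag extraction of this kind can be sketched with PHP's DOMDocument; the function name below is illustrative, not the class's real API:

    ```php
    <?php
    // Illustrative sketch: pull the title, keywords and description
    // out of an HTML document with DOMDocument.
    function fetch_meta(string $html): array
    {
        $doc = new DOMDocument();
        @$doc->loadHTML($html); // suppress warnings from sloppy real-world HTML
        $title = $doc->getElementsByTagName('title')->item(0);
        $out = ['title' => $title ? trim($title->textContent) : ''];
        foreach ($doc->getElementsByTagName('meta') as $meta) {
            $name = strtolower($meta->getAttribute('name'));
            if ($name === 'keywords' || $name === 'description') {
                $out[$name] = $meta->getAttribute('content');
            }
        }
        return $out;
    }

    $html = '<html><head><title>Demo</title>'
          . '<meta name="keywords" content="php,search">'
          . '<meta name="description" content="A demo page"></head><body></body></html>';
    print_r(fetch_meta($html));
    ```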
  6. Link And Domain Extract
    1869 total visits
    A page from a specified remote site is retrieved and parsed in order to extract the URLs of all links and e-mail addresses it contains. Link And Domain Extract can also check if a link exists in the page with a given URL, get the list of domains found in the page links, and check if there is a link in the ...
  7. Google Crawler
    2020 total visits
    The Google search result pages for the given keywords are retrieved. Google Crawler parses the result pages to retrieve the list of search result URLs, titles and excerpts from the pages that were found. The maximum number of result pages to retrieve is configurable. Requirements: PHP 4.0 or higher
  8.
    1833 total visits
    The recommended search text corrections, the number of results found, and the result entries that match a given URL pattern (title, summary, etc.) can be retrieved.
  9.
    2103 total visits
    The keywords are extracted from the HTTP referrer URL value, if it is a Google search results page. Requirements: PHP 4.0 or higher
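    Pulling the keywords out of a Google referrer URL amounts to reading its q query parameter. A hedged sketch, assuming the standard /search?q=... URL shape (the function name is illustrative):

    ```php
    <?php
    // Illustrative sketch: recover the search keywords from an HTTP
    // referrer URL when it points at a Google results page.
    function referrer_keywords(string $referrer): ?string
    {
        $parts = parse_url($referrer);
        if (!isset($parts['host'], $parts['query'])
            || stripos($parts['host'], 'google.') === false) {
            return null; // not a Google results page
        }
        parse_str($parts['query'], $query); // decodes %XX and "+" for us
        return $query['q'] ?? null;
    }

    var_dump(referrer_keywords('https://www.google.com/search?q=php+web+search')); // "php web search"
    ```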
  10. Web Crawler using MySQL DB
    2083 total visits
    Web Crawler using MySQL DB retrieves a given Web page and parses its HTML content to extract the URLs of links and frames. Crawled URLs are stored in a MySQL database table if they were not stored previously. The list of URLs already stored from a specified domain name can also be displayed on a Web ...
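    The store-if-not-seen step can be handled entirely by the database through a unique index. A sketch of the idea using an in-memory SQLite database so it runs standalone (the script itself targets MySQL, where the equivalent statement is INSERT IGNORE; table and column names are assumptions):

    ```php
    <?php
    // Illustrative sketch: a UNIQUE constraint on the url column makes
    // repeated inserts of the same URL no-ops, de-duplicating the crawl queue.
    $pdo = new PDO('sqlite::memory:');
    $pdo->exec('CREATE TABLE urls (url TEXT NOT NULL UNIQUE)');
    $stmt = $pdo->prepare('INSERT OR IGNORE INTO urls (url) VALUES (?)');
    $stmt->execute(['http://example.com/a.html']); // inserted
    $stmt->execute(['http://example.com/a.html']); // duplicate, silently skipped
    $count = (int) $pdo->query('SELECT COUNT(*) FROM urls')->fetchColumn();
    ```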
  11. Spider Engine
    2131 total visits
    Spider Engine can retrieve one or more pages from Web sites. The URLs of the pages may follow a numeric pattern. The HTML pages are parsed for configurable patterns. The data found in the pages is passed to a separate function for custom processing that can be implemented by a sub-class. Requirements: PHP 4.0 or higher
  12.
    2039 total visits
    Regular expressions are used to retrieve URLs, and all links detected on a page are followed.
  13. Robots_txt
    1988 total visits
    Robots_txt takes the URL of a page and retrieves the robots.txt file of the same site. The robots.txt file is parsed and the rules defined in it are looked up in order to determine whether crawling a page is allowed. Robots_txt also stores the time when a page is crawled to check whether next time another page of the same site is ...
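    A simplified sketch of the rule lookup (prefix matching against Disallow lines in the "User-agent: *" group; the real class is more complete, and the function name is illustrative):

    ```php
    <?php
    // Illustrative sketch: parse robots.txt rules for "User-agent: *"
    // and test whether a given path may be crawled.
    function is_allowed(string $robotsTxt, string $path): bool
    {
        $applies = false;
        foreach (preg_split('/\R/', $robotsTxt) as $line) {
            $line = trim(preg_replace('/#.*/', '', $line)); // strip comments
            if ($line === '') continue;
            [$field, $value] = array_map('trim', explode(':', $line, 2) + [1 => '']);
            $field = strtolower($field);
            if ($field === 'user-agent') {
                $applies = ($value === '*');
            } elseif ($applies && $field === 'disallow' && $value !== ''
                      && strpos($path, $value) === 0) {
                return false; // path falls under a Disallow prefix
            }
        }
        return true; // no matching rule: crawling is allowed
    }

    $rules = "User-agent: *\nDisallow: /private/\nDisallow: /tmp/";
    var_dump(is_allowed($rules, '/private/page.html')); // bool(false)
    var_dump(is_allowed($rules, '/public/index.html')); // bool(true)
    ```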
  14. Spider Class
    2314 total visits
    Spider Class can retrieve Web pages and parse them to extract the list of their links, so that it can continue crawling all linked pages. Pages may be retrieved iteratively until a given limit of pages or link depth is reached. Spider Class makes it possible to set regular expressions for both link definitions and content matches, changeable at every depth.
  15. Htdig site indexing and searching interface
    1949 total visits
    The setup is done through a configuration file containing user-defined values. Key features of the Htdig site indexing and searching interface:
    - Set up a suitable configuration file from a few user-defined parameters.
    - Index Web pages to build the search databases.
    - Search the indexed database to capture the matches into a PHP data structure ready to be used to display the results ...
Page 3 of 5