Looking for web scraping technology?

Easily build parsers for websites and APIs, manage scraping schedules, and store data in a wide range of storages.

When do you need an automatic web parser (scraper)?

Large amount of data

In this "classic" case, manual processing requires a lot of time and money. The human factor (fatigue) also reduces the relevance and accuracy of the results.

Small amount of relevant data

When a large initial data set contains only a small amount of relevant information, most of the manual processing time is spent filtering out "information noise", a task that can be performed automatically and at low cost.

Frequent source updates

When the target data source updates frequently, manual processing requires a lot of time and money, and manual extraction increases the delay between data appearing and your response to it.

Onlizer HTML Parser features for data scraping and processing

We built Onlizer HTML Parser on 6+ years of experience and have proven it in numerous implemented projects.

It's visual and requires no coding

We created a tool that builds parsing processes using a visual constructor and step-by-step scenarios for navigation, data extraction, and storage.

Support for a wide range of formats and technologies

The platform provides tools for working with HTML, JSON, JavaScript, XML, TXT, emails, web forms, web requests, etc.

Batch data processing at high scale

The components of the platform are designed to support batch processing of data, as well as efficient handling of large volumes of requests.

Integration with a wide range of data storages

Onlizer provides a large set of connector modules for various types of storage: MySQL, Microsoft SQL Server, Oracle, Google Spreadsheets, Excel, MongoDB, Redis, Amazon S3, Redshift, Azure Table Storage, FTP, covering strict data schemas, schema-less stores, popular enterprise solutions, and exotic cases.

Pricing optimization

Onlizer pricing is designed to provide the best price per unit. Our HTML Parser optimizes navigation and data scraping scenarios, and we use the most efficient technologies to extract data. Our clients pay only for the resources they actually use and do not bear the cost of supporting a large infrastructure.

Extraction and processing algorithms can be flexibly changed on the fly

Data extraction scenarios can be easily modified "on the fly" to adapt to data source changes or to cover new requirements, without long delays to rewrite code or scripts.

Deep team experience in data extraction and processing

We have been developing data scraping and processing solutions for 6+ years and have deep expertise in all aspects of these tasks.

Start your journey in the world of limitless automation for free

You can test all features for free during a 30-day trial period; no credit card required.

Additional data scraping features

Data extraction

  • from HTML markup
  • by URL (with customizable request settings)
  • from in-document JS scripts
  • from JSON
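Onlizer itself is a visual, no-code tool, but the extraction steps above map onto familiar parsing logic. The sketch below, using only Python's standard library and a hypothetical product-list snippet, shows the equivalent of extracting values from HTML markup and from a JSON API payload:

```python
import json
from html.parser import HTMLParser

# Collect the text of every <a> tag from an HTML snippet (hypothetical markup).
class LinkTextParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_link = False
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.in_link = True

    def handle_endtag(self, tag):
        if tag == "a":
            self.in_link = False

    def handle_data(self, data):
        if self.in_link:
            self.links.append(data.strip())

html_doc = '<ul><li><a href="/p/1">Red shoes</a></li><li><a href="/p/2">Blue hat</a></li></ul>'
parser = LinkTextParser()
parser.feed(html_doc)
print(parser.links)  # ['Red shoes', 'Blue hat']

# Extraction from JSON is simpler: parse, then index into the structure.
api_payload = '{"products": [{"name": "Red shoes", "price": 49.99}]}'
data = json.loads(api_payload)
print(data["products"][0]["name"])  # Red shoes
```

Extraction by URL would add a fetch step in front of `parser.feed`; the parsing logic stays the same.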

Parsing scenarios

  • Single-page parsing
  • Multi-page (batch) parsing
  • Multi-page parsing with single or multiple models
  • Value list extraction
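The core idea of multi-page (batch) parsing is applying one extraction model to every page in a batch. The hypothetical sketch below stubs the fetched pages as strings (a real scraper would fetch each URL) and runs the same model over all of them:

```python
import re

# Stubbed pages keyed by URL; in a real batch run these would be fetched.
PAGES = {
    "https://example.com/catalog?page=1": "<h2>Item A</h2><span class='price'>$10</span>",
    "https://example.com/catalog?page=2": "<h2>Item B</h2><span class='price'>$12</span>",
}

def extract_model(html: str) -> dict:
    """One extraction model: pull the title and price out of a page."""
    title = re.search(r"<h2>(.*?)</h2>", html).group(1)
    price = re.search(r"\$(\d+)", html).group(1)
    return {"title": title, "price": int(price)}

# Batch step: the same model applied to every page.
results = [extract_model(html) for html in PAGES.values()]
print(results)
```

Multi-model parsing would simply map different extraction functions to different page types.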

Value formatting

  • Value extraction by patterns (RegEx supported)
  • Data format converters
  • Symbol clearing/replacing
  • Nested arrays and objects extraction
  • Data templating, formatting and transformation
 

Our cases

Discounts aggregator for online stores

Customer: International goods delivery company

Goal: Aggregate discounts on goods from 50+ online stores on a weekly basis. Overall position count: 500,000+.

Solution: Developed scrapers for all stores to parse discounted goods, including product names, descriptions, sizes and colors, regular and discounted prices, and images. Data is stored in Microsoft SQL Server and integrated into the company's website.

Price per unit: $0.0003 per product ($3 per 10 000 positions)
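A quick sanity check of the quoted pricing against the case's numbers (assuming the full 500,000 positions are processed in one weekly run):

```python
price_per_unit = 0.0003   # dollars per product, as quoted
positions = 500_000       # weekly position count from the case above

per_10k = price_per_unit * 10_000   # ~= $3 per 10,000 positions, as quoted
weekly_cost = price_per_unit * positions

print(per_10k, weekly_cost)
```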