Amazon Bestsellers Scraper

Developed by codemaster devops · Maintained by Community

Pricing: $9.99/month + usage

Amazon Bestsellers Scraper is an Apify actor built to extract real-time data from Amazon's Best Sellers lists efficiently. It is tailored for businesses, researchers, and individuals who want to monitor and analyze market trends, product rankings, and pricing dynamics.

Last modified: 9 days ago

Amazon Best Sellers Scraper

Extract the top 100 best-selling products from any Amazon marketplace (.com, .co.uk, .de, .fr, .es, .it). This Amazon Best Sellers scraper collects the item name, price, currency, number of offers, product URL, thumbnail, and the category hierarchy so you can analyze trends or feed the data into your own dashboards. Use it as a lightweight Amazon Best Sellers API alternative that returns clean JSON/CSV data on autopilot.

Why marketers and analysts love this Amazon Best Sellers scraper

  • Spot winning products fast – track the items that Amazon pushes to the top of each bestseller chart and react before competitors.
  • Monitor pricing and offers – export daily price, currency, and offer count data to understand how aggressive each category is.
  • Avoid manual copy/paste – automate repetitive research with a maintained actor rather than brittle spreadsheets and browser extensions.
  • Feed business intelligence tools – ship structured data straight into Google Sheets, Airtable, Power BI, Tableau, or your internal dashboards.

Who is this scraper for

  • Ecommerce sellers researching private-label opportunities or gap analysis.
  • Agencies monitoring client categories to find influencer and advertising hooks.
  • Marketplaces comparing Amazon catalogues with their own inventory and pricing strategy.
  • Investors watching macro retail trends and building alerts for fast-moving products.
  • SEO specialists building “top products” content that updates programmatically.

How the scraper works

  1. Validate your input – the actor checks that every provided URL belongs to an Amazon Best Sellers page and normalises your per-category item limit.
  2. Queue categories – it opens the homepage (or your custom category URLs), follows the left-hand navigation tree, and enqueues categories up to the crawl depth you specify.
  3. Scrape detail pages – for each category it detects whether the page uses the new or legacy layout, scrolls the content, and extracts up to 100 items (or the limit you request) from page 1 and page 2.
  4. Store results – products are saved to the default dataset with a position field so you can keep the Best Seller ranking order.

The actor uses Apify’s Puppeteer infrastructure, automatic proxy rotation, session retirement, and CAPTCHA detection to stay reliable on a difficult target like Amazon.
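Step 1 of the pipeline above (input validation and limit normalisation) can be sketched roughly as follows. The helper names and exact checks are hypothetical, not the actor's actual source:

```javascript
// Sketch of step 1: validate Best Sellers URLs and normalise the
// per-category item limit. Hypothetical helpers -- not the actor's real code.
function isBestSellersUrl(url) {
  try {
    const { hostname, pathname } = new URL(url);
    // Best Sellers pages live under /zgbs or contain "Best-Sellers"/"bestsellers".
    return /(^|\.)amazon\./.test(hostname) &&
      (/\/zgbs/i.test(pathname) || /best-?sellers/i.test(pathname));
  } catch {
    return false; // not a valid URL at all
  }
}

function normaliseMaxItems(maxItemsPerCategory) {
  // 0 means "queue categories but save nothing"; an omitted field means "no cap".
  if (maxItemsPerCategory === undefined) return Infinity;
  if (!Number.isInteger(maxItemsPerCategory) || maxItemsPerCategory < 0) {
    throw new Error('maxItemsPerCategory must be a non-negative integer');
  }
  return maxItemsPerCategory;
}
```

Any URL that fails this kind of check is rejected before the crawl starts, which is why runs stop early when a non-bestsellers URL is supplied.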

Input parameters

| Field | Type | Required | Description |
| --- | --- | --- | --- |
| `proxy` | object | yes | Proxy configuration. Using Apify Proxy with the `RESIDENTIAL` group is recommended. |
| `domain` | string | no | The Best Sellers homepage to start from. Defaults to `https://www.amazon.com/Best-Sellers/zgbs`. Ignored when `categoryUrls` are provided. |
| `categoryUrls` | array of objects | no | Specific Best Seller category URLs to enqueue. Leave empty to start from the homepage navigation tree. |
| `depthOfCrawl` | integer | no | How deep to follow the category tree. Valid values are `1` (main categories only) or `2` (one more level). Default is `1`. |
| `maxItemsPerCategory` | integer | no | Caps the number of products stored per category. Set to `0` to skip saving items or omit the field to collect every product Amazon exposes. Default is `20`. |

Example input

```json
{
  "proxy": {
    "useApifyProxy": true,
    "apifyProxyGroups": ["RESIDENTIAL"],
    "apifyProxyCountry": "US"
  },
  "categoryUrls": [
    { "url": "https://www.amazon.com/best-sellers-books-Amazon/zgbs/books/" }
  ],
  "depthOfCrawl": 1,
  "maxItemsPerCategory": 20
}
```

This configuration scrapes only the Books bestseller list. Remove the categoryUrls array (or add more entries) when you want to crawl additional categories, and adjust maxItemsPerCategory for different per-category limits.

Running the scraper

From the Apify Console

  1. Open the actor and click Input.
  2. Press Restore default input to load the example above.
  3. Adjust maxItemsPerCategory, depthOfCrawl, or add categoryUrls if you need specific verticals.
  4. Click Save & Start. The run log will show which categories were enqueued and how many items were stored.

Via the Apify API

Use the Run Actor endpoint with the JSON body shown above. Example using curl:

```bash
curl "https://api.apify.com/v2/acts/<ACTOR_ID>/runs?token=<APIFY_TOKEN>" \
  -H "Content-Type: application/json" \
  -d '{
    "proxy": { "useApifyProxy": true, "apifyProxyGroups": ["RESIDENTIAL"], "apifyProxyCountry": "US" },
    "domain": "https://www.amazon.com/Best-Sellers/zgbs",
    "depthOfCrawl": 1,
    "maxItemsPerCategory": 20
  }'
```

Locally with the Apify CLI

```bash
npm install
npx apify login
npx apify run --input-file=./input.json
```

input.json already contains the same configuration as the example above.

Output dataset

Each item in the default dataset contains the fields below:

| Field | Description |
| --- | --- |
| `position` | Ranking on the Best Sellers page (starts at 1). |
| `category` | Category name taken from the page title. |
| `categoryUrl` | URL of the category that was scraped. |
| `name` | Product title. |
| `price` | Numeric price (best-effort parsing across currencies). |
| `currency` | Currency symbol or ISO code derived from the price string. |
| `numberOfOffers` | Count of seller offers if exposed by Amazon. |
| `url` | Fully qualified product URL. |
| `thumbnail` | URL of the product thumbnail image. |

A machine-readable version of this schema is available in OUTPUT_SCHEMA.json so you can copy it straight into the Apify console or your own tooling.

Example record:

```json
{
  "position": 1,
  "category": "Amazon Best Sellers: Best Books",
  "categoryUrl": "https://www.amazon.com/best-sellers-books-Amazon/zgbs/books/",
  "name": "Example Product",
  "price": 19.99,
  "currency": "$",
  "numberOfOffers": 12,
  "url": "https://www.amazon.com/dp/B000000",
  "thumbnail": "https://images-na.ssl-images-amazon.com/images/I/example.jpg"
}
```
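The `price` and `currency` fields come from best-effort parsing of the raw price string. A parser in that spirit might look like this sketch (illustrative only, not the actor's actual implementation):

```javascript
// Best-effort parse of a raw Amazon price string into { price, currency }.
// Illustrative only -- the actor's real parsing may differ.
function parsePrice(raw) {
  if (!raw) return { price: null, currency: null };
  // Capture a leading currency marker ($, £, €, or an ISO code like EUR).
  const currencyMatch = raw.match(/^\s*([^\d\s.,-]+|[A-Z]{3})/);
  // Grab the first number, tolerating both "1,234.56" and "1.234,56" styles.
  const numberMatch = raw.match(/[\d.,]+/);
  let price = null;
  if (numberMatch) {
    let digits = numberMatch[0];
    // If the last separator is a comma, treat it as the decimal point.
    if (/,\d{1,2}$/.test(digits)) {
      digits = digits.replace(/\./g, '').replace(',', '.');
    } else {
      digits = digits.replace(/,/g, '');
    }
    price = parseFloat(digits);
  }
  return { price, currency: currencyMatch ? currencyMatch[1] : null };
}
```

This is also why `price` can legitimately be `null`: promotional bundles and list items without a visible price string produce no number to parse.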

Results are typically ready within minutes. You can export the dataset as JSON, CSV, Excel, or connect it to Google Sheets via Apify Integrations.
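Because every item carries `category` and `position` fields, exported JSON is easy to regroup for dashboards. A small post-processing sketch (hypothetical helper, not part of the actor):

```javascript
// Regroup exported dataset items by category while preserving the
// Best Seller ranking order via the position field.
function groupByCategory(items) {
  const groups = new Map();
  for (const item of items) {
    if (!groups.has(item.category)) groups.set(item.category, []);
    groups.get(item.category).push(item);
  }
  // Sort each category's items back into ranking order.
  for (const list of groups.values()) list.sort((a, b) => a.position - b.position);
  return groups;
}
```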

Tips and limitations

  • depthOfCrawl supports values 1 and 2. Going deeper usually re-traverses the same tree while increasing the load on Amazon, so it is intentionally capped.
  • Amazon actively blocks automated traffic. Stick to Apify residential proxies, keep the default concurrency (10), and avoid running many parallel tasks if you see CAPTCHAs in the logs.
  • Set maxItemsPerCategory to a lower value (e.g., 10) for quick smoke tests or to reduce dataset size, and to 0 when you only want to queue deeper categories without saving products.
  • The actor captures up to two pages per category. If Amazon introduces more pages with content, increase maxItemsPerCategory to ensure they are fetched.

Visualize the results

  • Dataset preview: Every run produces a dataset with a built-in table and chart view—open the run in the Apify console and click Dataset to explore the results without exporting anything.
  • Shareable UI: Use Apify’s “Share dataset” feature to publish a public link, then embed it in your dashboard or documentation for a zero-code viewer.
  • Custom dashboards: Connect the dataset to Google Looker Studio, Power BI, or Tableau to build richer visuals. The dataset schema is stable, so you can schedule the actor and refresh the dashboard automatically.

Frequently asked questions

How often can I run the Amazon Best Sellers scraper?

As often as you like. Many users schedule the actor hourly or daily via Apify Tasks to maintain a live Amazon Best Sellers dataset for BI dashboards.

Can I scrape multiple Amazon marketplaces in the same run?

Yes. Add category URLs from any supported Amazon TLD to the categoryUrls array. The actor detects the correct locale automatically.
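For instance, a `categoryUrls` array mixing marketplaces might look like this (the non-.com URLs are illustrative; use the Best Sellers URLs you see in your browser):

```json
{
  "proxy": { "useApifyProxy": true, "apifyProxyGroups": ["RESIDENTIAL"] },
  "categoryUrls": [
    { "url": "https://www.amazon.com/best-sellers-books-Amazon/zgbs/books/" },
    { "url": "https://www.amazon.de/gp/bestsellers/books/" },
    { "url": "https://www.amazon.co.uk/best-sellers-books-Amazon/zgbs/books/" }
  ]
}
```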

Does this replace the official Amazon API?

For bestseller research, yes. It is a no-code Amazon Best Sellers API alternative that outputs exactly the data you need without building or maintaining scrapers yourself.

Can I enrich the results with review counts or ratings?

The actor currently focuses on bestseller ranking data. You can post-process the collected product URLs with another Apify actor (for example, an Amazon product details scraper) to append reviews and ratings.

Troubleshooting

| Symptom | Suggested fix |
| --- | --- |
| Run stops after the homepage | Verify that the start URL contains `bestsellers` or `Best-Sellers`. The actor retires sessions if navigation is blocked; rerun or try different proxies. |
| Empty dataset | Check the logs for `No category links found`. Amazon occasionally tests new layouts: open the page in a browser to confirm the selector has not changed, then let us know. |
| Prices missing or null | Some top items are promotional bundles without clear prices. This is normal. |
| HTTP 429 / CAPTCHA warnings | Leave the default proxy configuration, reduce concurrent runs, or enable Apify's automatic proxy rotation. |

Always comply with Amazon’s Terms of Service and data protection laws (GDPR, CCPA, etc.). For a deep dive into the legal aspects of scraping, read Is Web Scraping Legal?.


Harness Amazon’s bestseller data to discover trends, benchmark competitors, and make faster product decisions. Start your first Amazon Best Sellers scrape today and turn marketplace insights into growth.
