The Fastest Way to Take Website Screenshots in Code
The fastest way to take a website screenshot programmatically is with a screenshot API. Three lines of code, no browser installation, no dependencies. SnapRender's free tier gives you 500 screenshots per month with no credit card required. Sign up, grab your API key, and paste the code below. You'll have a working screenshot in under two minutes.
The Shortest Path: SnapRender SDK
Node.js (3 lines)
const { SnapRender } = require("snaprender");
const client = new SnapRender("YOUR_API_KEY");
const screenshot = await client.screenshot("https://example.com");
Install with npm install snaprender. That's it. The screenshot variable contains the image buffer. Write it to a file, send it in an HTTP response, upload it to S3, whatever you need.
Python (3 lines)
from snaprender import SnapRender
client = SnapRender("YOUR_API_KEY")
screenshot = client.screenshot("https://example.com")
Install with pip install snaprender. Same idea. The SDK handles the HTTP request, error handling, and response parsing. For a full walkthrough of the Python SDK, see How to Screenshot a Website with Python.
cURL (1 line)
curl -G "https://app.snap-render.com/v1/screenshot" \
-H "X-API-Key: YOUR_API_KEY" \
-d "url=https://example.com" \
--output screenshot.png
No SDK needed. Works from any terminal. Good for testing, shell scripts, or quick one-offs. For more cURL examples and automation patterns, check out Automate Screenshots with cURL.
Raw HTTP (Any Language)
If you'd rather not install an SDK, SnapRender's API is a single GET request. Here's the pattern in any language that can make HTTP calls:
Node.js with fetch:
const response = await fetch(
"https://app.snap-render.com/v1/screenshot?url=https://example.com&format=png",
{ headers: { "X-API-Key": "YOUR_API_KEY" } }
);
const buffer = await response.arrayBuffer();
Python with requests:
import requests
response = requests.get(
    "https://app.snap-render.com/v1/screenshot",
    headers={"X-API-Key": "YOUR_API_KEY"},
    params={"url": "https://example.com", "format": "png"},
)
response.raise_for_status()  # surface auth or quota errors instead of saving an error body
with open("screenshot.png", "wb") as f:
    f.write(response.content)
Go:
req, _ := http.NewRequest("GET", "https://app.snap-render.com/v1/screenshot?url=https://example.com", nil)
req.Header.Set("X-API-Key", "YOUR_API_KEY")
resp, _ := http.DefaultClient.Do(req)
defer resp.Body.Close()
img, _ := io.ReadAll(resp.Body)
os.WriteFile("screenshot.png", img, 0644)
PHP:
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, "https://app.snap-render.com/v1/screenshot?url=" . urlencode("https://example.com"));
curl_setopt($ch, CURLOPT_HTTPHEADER, ["X-API-Key: YOUR_API_KEY"]);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$screenshot = curl_exec($ch);
file_put_contents("screenshot.png", $screenshot);
One endpoint. One header for authentication. URL parameters for customization. The response body is the image.
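One practical note: if the target URL carries its own query string, percent-encode it before splicing it into the request, or its `?` and `&` will be read as part of the screenshot request. Node's built-in URLSearchParams handles this:

```javascript
// Build the request URL with the target URL safely percent-encoded.
const params = new URLSearchParams({
  url: "https://example.com/page?id=1&lang=en", // target URL with its own query string
  format: "png",
});
// The "?", "&", and "=" inside the target URL are escaped, so they can't be
// mistaken for separators in the screenshot request itself.
const endpoint = `https://app.snap-render.com/v1/screenshot?${params}`;
console.log(endpoint);
```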
Now Compare: Puppeteer
Here's the Puppeteer equivalent to take the same screenshot programmatically:
const puppeteer = require("puppeteer");
async function takeScreenshot(url) {
const browser = await puppeteer.launch({
args: ["--no-sandbox", "--disable-setuid-sandbox"],
});
try {
const page = await browser.newPage();
await page.setViewport({ width: 1280, height: 720 });
await page.goto(url, {
waitUntil: "networkidle0",
timeout: 30000,
});
const screenshot = await page.screenshot({ type: "png" });
return screenshot;
} finally {
await browser.close();
}
}
takeScreenshot("https://example.com").then((buf) => {
require("fs").writeFileSync("screenshot.png", buf);
});
That's 20 lines. And before running this, you need:
- npm install puppeteer (downloads a ~170MB Chromium binary)
- On Linux: install system dependencies (apt-get install chromium-browser or a long list of shared libraries)
- Enough RAM to run Chrome (200-300MB per instance)
- Patience for the 3-10 second startup time per browser launch
The Puppeteer approach isn't bad. It's a solid tool. But if your goal is "take a screenshot as fast as possible," it's objectively slower to set up and more code to write.
Time Comparison
| Step | SnapRender | Puppeteer |
|---|---|---|
| Create account / install | 1 minute (sign up) | 3-5 minutes (npm install + system deps) |
| Write capture code | 30 seconds (3 lines) | 5-10 minutes (20 lines + error handling) |
| First successful screenshot | Under 2 minutes | 15-30 minutes |
| Debug first issue | Unlikely (it just works) | Very likely (Chrome args, sandbox, fonts) |
This isn't about Puppeteer being bad. It's about the amount of friction between you and a working screenshot. SnapRender removes almost all of it.
Common Customizations
The basic screenshot call works, but most real use cases need tweaks. Here's how to handle the common ones with SnapRender.
Full-Page Screenshot
Captures the entire scrollable page, not just the viewport. SnapRender supports up to 32,768 pixels in height.
const { SnapRender } = require("snaprender");
const client = new SnapRender("YOUR_API_KEY");
const screenshot = await client.screenshot("https://example.com", {
full_page: true,
});
from snaprender import SnapRender
client = SnapRender("YOUR_API_KEY")
screenshot = client.screenshot("https://example.com", full_page=True)
Mobile Viewport
Render the page as it would appear on a phone. For more on mobile captures, see Mobile Screenshot API.
const screenshot = await client.screenshot("https://example.com", {
width: 390,
height: 844,
device_scale_factor: 3,
});
screenshot = client.screenshot("https://example.com",
width=390,
height=844,
device_scale_factor=3,
)
A width of 390 with a 3x scale factor matches an iPhone 14. SnapRender supports viewport widths from 320 to 3,840 pixels.
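The scale factor multiplies the CSS viewport into physical pixels, so the image you get back is larger than the viewport you set:

```javascript
const viewport = { width: 390, height: 844, deviceScaleFactor: 3 };

// Physical image dimensions = CSS pixels × device scale factor.
const physicalWidth = viewport.width * viewport.deviceScaleFactor;   // 1170
const physicalHeight = viewport.height * viewport.deviceScaleFactor; // 2532

console.log(`${physicalWidth}x${physicalHeight}`); // "1170x2532"
```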
Dark Mode
Force the page into dark mode, useful for previews and thumbnails.
const screenshot = await client.screenshot("https://example.com", {
dark_mode: true,
});
screenshot = client.screenshot("https://example.com", dark_mode=True)
This sets prefers-color-scheme: dark at the browser level, so sites with dark mode CSS will render accordingly.
PDF Output
Get a PDF instead of an image. Same API, different format parameter.
const pdf = await client.screenshot("https://example.com", {
format: "pdf",
});
pdf = client.screenshot("https://example.com", format="pdf")
Block Ads and Cookie Banners
Clean screenshots without visual noise. For more on this, see How to Block Cookie Banners in Screenshots.
const screenshot = await client.screenshot("https://example.com", {
block_ads: true,
no_cookie_banners: true,
});
screenshot = client.screenshot("https://example.com",
block_ads=True,
no_cookie_banners=True,
)
Hide Specific Elements
Remove specific page elements by CSS selector before capture. Useful for removing headers, footers, floating chat widgets, or anything else that clutters the screenshot.
const screenshot = await client.screenshot("https://example.com", {
hide_selectors: ["#cookie-popup", ".floating-chat", "header"],
});
screenshot = client.screenshot("https://example.com",
hide_selectors=["#cookie-popup", ".floating-chat", "header"],
)
HTML to Image
You can also render raw HTML instead of a URL. Pass an HTML string and get back a PNG, JPEG, or WebP. See the HTML to Image guide for details.
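As a hypothetical sketch of what that request could look like: the html parameter name and the POST-with-JSON shape below are assumptions, not confirmed API details, so check the HTML to Image guide for the actual contract.

```javascript
// Assumed request shape -- verify parameter names against the API docs.
const payload = {
  html: "<html><body><h1>Hello, world</h1></body></html>",
  format: "png",
};
const requestInit = {
  method: "POST",
  headers: {
    "X-API-Key": "YOUR_API_KEY",
    "Content-Type": "application/json",
  },
  body: JSON.stringify(payload),
};
// Later: await fetch("https://app.snap-render.com/v1/screenshot", requestInit);
```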
Output Format
SnapRender returns PNG by default. You can also request JPEG (smaller files), WebP (even smaller), or PDF.
const webp = await client.screenshot("https://example.com", {
format: "webp",
});
Caching
SnapRender caches screenshots automatically. Fresh captures take 2-5 seconds. Cached responses return in under 200ms. You can set a custom cache TTL:
const screenshot = await client.screenshot("https://example.com", {
cache_ttl: 86400, // 24 hours
});
Putting It Together: A Real Example
Here's a complete, copy-paste-ready script that captures screenshots of multiple URLs and saves them to disk.
Node.js
const { SnapRender } = require("snaprender");
const fs = require("fs");
const client = new SnapRender("YOUR_API_KEY");
const urls = [
"https://github.com",
"https://news.ycombinator.com",
"https://stackoverflow.com",
];
async function captureAll() {
for (const url of urls) {
const filename = new URL(url).hostname.replace(/\./g, "-") + ".png";
const screenshot = await client.screenshot(url, {
width: 1280,
height: 720,
format: "png",
block_ads: true,
no_cookie_banners: true,
});
fs.writeFileSync(filename, screenshot);
console.log(`Saved ${filename}`);
}
}
captureAll();
Python
from snaprender import SnapRender
from urllib.parse import urlparse
client = SnapRender("YOUR_API_KEY")
urls = [
"https://github.com",
"https://news.ycombinator.com",
"https://stackoverflow.com",
]
for url in urls:
    filename = urlparse(url).hostname.replace(".", "-") + ".png"
    screenshot = client.screenshot(
        url,
        width=1280,
        height=720,
        format="png",
        block_ads=True,
        no_cookie_banners=True,
    )
    with open(filename, "wb") as f:
        f.write(screenshot)
    print(f"Saved {filename}")
Run either script and you'll have three screenshots on disk in about 10 seconds.
SnapRender Pricing
No feature gating. Every plan includes every feature. The only difference is volume.
| Plan | Price | Screenshots/month |
|---|---|---|
| Free | $0 | 500 |
| Starter | $9 | 2,000 |
| Growth | $29 | 10,000 |
| Business | $79 | 50,000 |
| Scale | $199 | 200,000 |
The free plan requires no credit card. Sign up at snap-render.com, get your API key from the dashboard, and start capturing. Five hundred screenshots a month is enough for most side projects, prototypes, and internal tools.
Getting Started
If you want to take a website screenshot programmatically with the least friction possible:
- Sign up at snap-render.com (30 seconds, no credit card)
- Copy your API key from the dashboard
- Install the SDK: npm install snaprender or pip install snaprender
- Paste the 3-line code snippet from above
- Run it
You'll have a working screenshot before you finish reading this sentence. The SnapRender SDK handles authentication, request formatting, error handling, and response parsing. All you provide is a URL and your API key.
For anything beyond the basics (full-page capture, mobile viewports, dark mode, PDF output, custom selectors), add the options shown in the customization examples above. Every parameter works on every plan, including free. For the full API reference, see the Screenshot API Complete Guide.