aquasecurity / cloudsploit Goto Github PK
View Code? Open in Web Editor NEW
Cloud Security Posture Management (CSPM)
Home Page: https://cloud.aquasec.com/signup
License: GNU General Public License v3.0
This is even something I'd consider building myself / adding to the project. Even otherwise-secure AWS keys can end up exposed in a public GitHub repo (even in deleted commit histories). Any interest?
The repo currently has 32 unit tests, but travis is only running 12 of them. This means that work to ensure that things cannot easily break is not really being used. Fix the travis build so that the tests actually run there.
I tried to install this using npm install
npm install
npm http GET https://registry.npmjs.org/async
npm http GET https://registry.npmjs.org/aws-sdk
npm http 304 https://registry.npmjs.org/async
npm http 304 https://registry.npmjs.org/aws-sdk
npm http GET https://registry.npmjs.org/sax/1.1.5
npm http GET https://registry.npmjs.org/xmlbuilder/2.6.2
npm http GET https://registry.npmjs.org/xml2js/0.4.15
npm http 304 https://registry.npmjs.org/sax/1.1.5
npm http 304 https://registry.npmjs.org/xml2js/0.4.15
npm http 304 https://registry.npmjs.org/xmlbuilder/2.6.2
npm http GET https://registry.npmjs.org/lodash
npm http 304 https://registry.npmjs.org/lodash
[email protected] node_modules/async
[email protected] node_modules/aws-sdk
├── [email protected]
├── [email protected]
└── [email protected] ([email protected])
node index.js
module.js:340
throw err;
^
Error: Cannot find module '/home/aaaaa/scans/../../cloudsploit-secure/scan-test-credentials.json'
at Function.Module._resolveFilename (module.js:338:15)
at Function.Module._load (module.js:280:25)
at Module.require (module.js:364:17)
at require (module.js:380:17)
at Object.<anonymous> (/home/keerthi/scans/index.js:10:17)
at Module._compile (module.js:456:26)
at Object.Module._extensions..js (module.js:474:10)
at Module.load (module.js:356:32)
at Function.Module._load (module.js:312:12)
at Function.Module.runMain (module.js:497:10)
"s3:getBucketVersioning" permission denied errors are treated as "MFA Delete not enabled"
My account has a KMS key that the KMS Key Policy plugin reports as having 13 users/roles allowed to use it. The key in question seems to have 3 policies: one with one user, another with 6 users, and a third with the same 6 users but different actions/conditions. I'd add that up to 7 instead of 13.
The code does not seem to deduplicate users across the policies.
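A minimal sketch of the deduplication described above, assuming principals appear as ARN strings under Statement[].Principal.AWS (a hypothetical shape chosen for illustration, not the plugin's actual data model):

```javascript
// Count unique principals across multiple key policies, so a user referenced
// by several statements is only counted once.
function countUniquePrincipals(policies) {
    var unique = new Set();
    policies.forEach(function(policy) {
        (policy.Statement || []).forEach(function(statement) {
            var aws = (statement.Principal || {}).AWS;
            if (!aws) return;
            var arns = Array.isArray(aws) ? aws : [aws];
            arns.forEach(function(arn) { unique.add(arn); });
        });
    });
    return unique.size;
}
```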
Allow multiple profiles to be used, via the AWS CLI's profile option.
The test for Elasticsearch logging provides a different shape of data than the API actually provides. Here is a sample from one of the tests:
{
DomainStatus: {
DomainName: 'mydomain',
ARN: 'arn:1234',
LogPublishingOptions: {
Enabled: true,
CloudWatchLogsLogGroupArn: 'arn:1234'
}
}
}
But the log publishing options are distinguished by kind of log, like this (anonymized from "Live Run" Raw Response):
"LogPublishingOptions": {
"ES_APPLICATION_LOGS": {
"CloudWatchLogsLogGroupArn": "arn:aws:logs:<REGION>:<ACCOUNT>:log-group:<LOG_GROUP>:*",
"Enabled": true
},
"INDEX_SLOW_LOGS": {
"CloudWatchLogsLogGroupArn": "arn:aws:logs:<REGION>:<ACCOUNT>:log-group:<LOG_GROUP>:*",
"Enabled": true
},
"SEARCH_SLOW_LOGS": {
"CloudWatchLogsLogGroupArn": "arn:aws:logs:<REGION>:<ACCOUNT>:log-group:<LOG_GROUP>:*",
"Enabled": true
}
}
I expect this is causing every logging scan to fail for every user of CloudSploit with an ES resource.
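If the live shape above is correct, a check would iterate per log type rather than reading a single top-level Enabled flag. A hedged illustration, not the plugin's actual code:

```javascript
// Treat each log type (e.g. ES_APPLICATION_LOGS) as enabled only if its own
// entry has Enabled set and a CloudWatch log group ARN configured.
function getEnabledLogTypes(logPublishingOptions) {
    var enabled = [];
    Object.keys(logPublishingOptions || {}).forEach(function(logType) {
        var opt = logPublishingOptions[logType];
        if (opt && opt.Enabled && opt.CloudWatchLogsLogGroupArn) enabled.push(logType);
    });
    return enabled;
}
```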
I am trying to run this application as a Lambda function. My Lambda function's API endpoint gives the following output. Not sure what to do with this; documentation on these things would be helpful.
{
"code":0,
"data":{
"plugins":[
{
"title":"CloudTrail Bucket Delete Policy",
"query":"cloudtrailBucketDelete",
"description":"Ensures CloudTrail logging bucket has a policy to prevent deletion of logs without an MFA token"
},
{
"title":"CloudTrail Enabled",
"query":"cloudtrailEnabled",
"description":"Ensures CloudTrail is enabled for all regions within an account"
},
{
"title":"Account Limits",
"query":"accountLimits",
"description":"Determine if the number of resources is close to the AWS per-account limit"
},
{
"title":"Security Groups",
"query":"securityGroups",
"description":"Determine if sensitive ports are open to all source addresses"
},
{
"title":"Certificate Expiry",
"query":"certificateExpiry",
"description":"Detect upcoming expiration of certificates used with ELBs"
},
{
"title":"Insecure Ciphers",
"query":"insecureCiphers",
"description":"Detect use of insecure ciphers on ELBs"
},
{
"title":"Password Policy",
"query":"passwordPolicy",
"description":"Ensures a strong password policy is setup for the account"
},
{
"title":"Root Account Security",
"query":"rootAccountSecurity",
"description":"Ensures a multi-factor authentication device is enabled for the root account and that no access keys are present"
},
{
"title":"Users MFA Enabled",
"query":"usersMfaEnabled",
"description":"Ensures a multi-factor authentication device is enabled for all users within the account"
},
{
"title":"Access Keys",
"query":"accessKeys",
"description":"Ensures access keys are properly rotated and audited"
},
{
"title":"Group Security",
"query":"groupSecurity",
"description":"Ensures groups contain users and policies"
},
{
"title":"Detect EC2 Classic",
"query":"detectClassic",
"description":"Ensures AWS VPC is being used instead of EC2 Classic"
},
{
"title":"S3 Buckets",
"query":"s3Buckets",
"description":"Ensures S3 buckets use proper policies and access controls"
},
{
"title":"Domain Security",
"query":"domainSecurity",
"description":"Ensures domains are properly configured in Route53"
},
{
"title":"Database Security",
"query":"databaseSecurity",
"description":"Ensures databases are properly configured in RDS"
}
]
}
}
As a user viewing the output from a CloudSploit scan, I would like to see whether rules violate particular compliance standards, such as CIS, so that I can report compliance levels, focus on the rules I care most about, and ignore the ones I care less about.
I'm creating this issue because I'm willing to implement annotating rules with compliance information, if that would be desirable to the maintainers here. If it isn't of interest, then I would skip the work.
I see two ways to achieve this:
a. add new items in the "compliance" member
b. add IDs to rules (plugins) and externalize the compliance information
My proposal is (b), because it would give anyone a way to add compliance information, including industry/domain-specific rules, without modifying this repo. Then assuming (b)
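As a hypothetical illustration of (b), an externalized mapping file might key compliance data by plugin ID. All IDs and control numbers below are placeholders, not an existing format in this repo:

```json
{
  "cloudtrailEnabled": {
    "cis": ["2.1"],
    "hipaa": true
  },
  "usersMfaEnabled": {
    "cis": ["1.2"]
  }
}
```

Anyone could then ship their own mapping file for an industry-specific standard without touching the plugin code.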
Although I enabled encryption on the Firehose stream, the scan still reports a failure for Firehose: "The Firehose delivery stream does not use a KMS key for SSE".
Can anyone help me with this?
A similar structure/scan for Azure would be good.
I am writing a new plugin in which I have to call describeAlarmsForMetric() and need to pass parameters:
var par = {MetricName: p.metricTransformations[0].metricName, Namespace: p.metricTransformations[0].metricNamespace };
So I am writing `helpers.cache(cache, cloudwatchlog, 'describeAlarmsForMetric', function(err, data) {`,
but I cannot pass the parameters into it. How can I pass parameters to this method?
Thanks in advance for the help
We are observing an issue where the SQS plugin returns UNKNOWN for queues that don't have server-side encryption enabled, even when performing the "Cross Account Access" check.
I suspect it is because the KmsMasterKeyId attribute of the getQueueAttributes API call is not allowed on queues without SSE. Using the AWS CLI, for example, I get this error:
An error occurred (InvalidAttributeName) when calling the GetQueueAttributes operation: Unknown Attribute KmsMasterKeyId
Missing comma in line 26 for the custom policy: https://github.com/cloudsploit/scans#inline-policy-not-recommended
In package.json the license is 'ISC'. The LICENSE file is GPLv3. Which one is being used for the project?
"AuditRole" : {
"Type": "AWS::IAM::Role",
"Properties": {
"Path": "/",
"AssumeRolePolicyDocument": {
"Version" : "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": { "AWS": "arn:aws:iam::057012691312:root" },
"Action": "sts:AssumeRole"
}
]
},
"ManagedPolicyArns" : ["arn:aws:iam::aws:policy/SecurityAudit"]
}
}
As a contributor, I'd like to make it as easy as possible to review and have my contributions approved. In making that possible, I'd like to propose adding code style checks as a part of the build.
I'm happy to adopt the style that already exists in the repository, so all I'm asking here is whether such a change would be of interest if I do all of the work.
I have been using the older version (last Dec) without an issue for a while. I did a pull to update, and now I am getting the following errors:
cloudsploit/scans/plugins/cloudtrail/cloudtrailBucketAccessLogging.js:66
callback(null, results, source);
^
TypeError: callback is not a function
at /Users/philcox/src/cloudsploit/scans/plugins/cloudtrail/cloudtrailBucketAccessLogging.js:66:4
at /Users/philcox/src/cloudsploit/scans/node_modules/async/dist/async.js:365:16
at replenish (/Users/philcox/src/cloudsploit/scans/node_modules/async/dist/async.js:831:29)
at /Users/philcox/src/cloudsploit/scans/node_modules/async/dist/async.js:842:29
at /Users/philcox/src/cloudsploit/scans/node_modules/async/dist/async.js:804:16
at /Users/philcox/src/cloudsploit/scans/plugins/cloudtrail/cloudtrailBucketAccessLogging.js:22:32
at /Users/philcox/src/cloudsploit/scans/node_modules/async/dist/async.js:3339:20
at replenish (/Users/philcox/src/cloudsploit/scans/node_modules/async/dist/async.js:836:21)
at /Users/philcox/src/cloudsploit/scans/node_modules/async/dist/async.js:842:29
at /Users/philcox/src/cloudsploit/scans/node_modules/async/dist/async.js:804:16
Any ideas?
For Ref:
24e0236 refs/remotes/origin/ap-south-1
525a34e refs/remotes/origin/master
1e0d307 refs/remotes/origin/plugins-to-tests
This scan, which checks whether AWS Config is recording and delivering properly, produces the warning "Config Service is configured, and recording, but not delivering properly" if the lastStatus field returns "Pending". This seems to be a timing issue where the scan catches the recorder while a delivery is still pending.
According to AWS, ConfigurationRecordersStatus returns "Pending" when the recorder has started, before it returns "Success" or "Failed".
We are getting notified when this scan returns WARN because of a "Pending" status.
Can logic be added to this scan to account for this status, in case the scan happens to run while the recorder is mid-delivery?
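A hedged sketch of the requested logic, treating "Pending" as a non-warning state; status codes follow the plugin convention of 0=OK, 1=WARN, 2=FAIL, and the exact casing of the status strings is an assumption:

```javascript
// Map the delivery channel's lastStatus to a scan result, without warning on
// the transient "Pending" state a mid-flight delivery reports.
function deliveryStatusResult(lastStatus) {
    if (lastStatus === 'Success' || lastStatus === 'SUCCESS') {
        return {status: 0, message: 'Config Service is configured, recording, and delivering properly'};
    }
    if (lastStatus === 'Pending' || lastStatus === 'PENDING') {
        return {status: 0, message: 'Config Service delivery is pending; recorder is running'};
    }
    return {status: 1, message: 'Config Service is configured, and recording, but not delivering properly'};
}
```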
When running cloudsploit we are only getting the first 10 SSM parameters per region.
aws --version
aws-cli/1.11.133 Python/2.7.5 Linux/3.10.0-693.11.6.el7.x86_64 botocore/1.6.0
pip freeze
Arpeggio==1.9.0
attrs==18.2.0
awscli==1.11.133
AWSScout2==3.2.1
Babel==0.9.6
backports.ssl-match-hostname==3.5.0.1
boto==2.45.0
boto3==1.4.6
botocore==1.6.0
certifi==2018.8.24
chardet==2.2.1
cloud-init==0.7.9
colorama==0.3.2
configobj==4.7.2
cov-core==1.15.0
coverage==3.6b3
decorator==3.4.0
docutils==0.11
enum34==1.1.6
ethtool==0.8
futures==3.0.5
iampoliciesgonewild==1.0.6.2
iniparse==0.4
invoke==1.2.0
ipaddress==1.0.16
IPy==0.75
Jinja2==2.7.2
jmespath==0.9.0
jsonpatch==1.2
jsonpointer==1.9
kitchen==1.1.1
lockfile==0.9.1
lxml==3.2.1
M2Crypto==0.21.1
Magic-file-extensions==0.2
MarkupSafe==0.11
netaddr==0.7.5
nose==1.3.7
nose2==0.6.5
opinel==3.3.4
parver==0.1.1
pciutils==1.7.3
perf==0.1
Pillow==2.0.0
pipenv==2018.7.1
policycoreutils-default-encoding==0.1
prettytable==0.7.2
pyasn1==0.1.9
pycurl==7.19.0
pygobject==3.22.0
pygpgme==0.3
pyinotify==0.9.4
pyjq==2.2.0
pyliblzma==0.5.3
pyOpenSSL==0.13.1
pyserial==2.6
pystache==0.5.3
python-daemon==1.6
python-dateutil==1.5
python-dmidecode==3.10.13
python-linux-procfs==0.4.9
pyudev==0.15
pyxattr==0.5.1
PyYAML==4.1
requests==2.6.0
rhnlib==2.5.65
rsa==3.4.1
s3transfer==0.1.10
schedutils==0.4
seobject==0.1
sepolicy==1.1
setuptools-scm==3.1.0
simplejson==3.10.0
six==1.11.0
subscription-manager==1.20.11
typing==3.6.6
uritools==2.2.0
urlgrabber==3.10
urllib3==1.10.2
virtualenv==16.1.0.dev0
virtualenv-clone==0.3.0
yum-metadata-parser==1.1.4
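The 10-per-region cap described above is consistent with a single unpaginated describeParameters call. A hedged sketch of draining all pages via NextToken; the `callApi` argument stands in for something like `ssm.describeParameters.bind(ssm)` (an assumption, not the collector's actual structure):

```javascript
// Accumulate results across pages until the API stops returning a NextToken.
function collectAllPages(callApi, params, done) {
    var all = [];
    function page(token) {
        var req = Object.assign({}, params);
        if (token) req.NextToken = token;
        callApi(req, function(err, data) {
            if (err) return done(err);
            all = all.concat(data.Parameters || []);
            if (data.NextToken) return page(data.NextToken);
            done(null, all);
        });
    }
    page(null);
}
```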
Lines 34-58 of the plugin treat data.PasswordPolicy.ExpirePasswords as an integer when it is actually a boolean.
Here's what I see in my account using the AWS CLI:
[cchalfant:~]$ aws iam get-account-password-policy
{
"PasswordPolicy": {
"AllowUsersToChangePassword": true,
"RequireLowercaseCharacters": true,
"RequireUppercaseCharacters": true,
"MinimumPasswordLength": 10,
"RequireNumbers": true,
"PasswordReusePrevention": 24,
"HardExpiry": false,
"RequireSymbols": true,
"MaxPasswordAge": 90,
"ExpirePasswords": true
}
}
But 1.11 fails in our account with the following output:
IAM Password Expiration N/A global FAIL Password expiration of: true days is less than 90
Relevant code snippet:
if (!data.PasswordPolicy.ExpirePasswords) {
    results.push({
        status: 2,
        message: 'Password expiration policy is not set to expire passwords',
        region: 'global'
    });
} else if (data.PasswordPolicy.ExpirePasswords < 90) {
    results.push({
        status: 2,
        message: 'Password expiration of: ' + data.PasswordPolicy.ExpirePasswords + ' days is less than 90',
        region: 'global'
    });
} else if (data.PasswordPolicy.ExpirePasswords < 24) {
    results.push({
        status: 1,
        message: 'Password expiration of: ' + data.PasswordPolicy.ExpirePasswords + ' days is less than 180',
        region: 'global'
    });
} else {
    results.push({
        status: 0,
        message: 'Password expiration of: ' + data.PasswordPolicy.ExpirePasswords + ' passwords is suitable',
        region: 'global'
    });
}
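A hedged sketch of corrected logic: ExpirePasswords stays a boolean gate, and the integer day count comes from MaxPasswordAge. It keeps the snippet's under-90-days failure threshold and drops the `< 24` branch, which was unreachable after the `< 90` check:

```javascript
// ExpirePasswords is a boolean flag; the number of days lives in MaxPasswordAge.
function checkPasswordExpiration(passwordPolicy) {
    if (!passwordPolicy.ExpirePasswords) {
        return {status: 2, message: 'Password expiration policy is not set to expire passwords'};
    }
    var days = passwordPolicy.MaxPasswordAge;
    if (days < 90) {
        return {status: 2, message: 'Password expiration of: ' + days + ' days is less than 90'};
    }
    return {status: 0, message: 'Password expiration of: ' + days + ' days is suitable'};
}
```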
it should be :
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"cloudtrail:DescribeTrails",
"cloudfront:ListDistributions",
"s3:GetBucketVersioning",
"s3:ListAllMyBuckets",
"s3:GetBucketAcl",
"ec2:DescribeAccountAttributes",
"ec2:DescribeAddresses",
"ec2:DescribeInstances",
"ec2:DescribeSecurityGroups",
"iam:ListServerCertificates",
"iam:GenerateCredentialReport",
"iam:GetCredentialReport",
"iam:GetAccountPasswordPolicy",
"iam:GetAccountSummary",
"iam:GetAccessKeyLastUsed",
"iam:GetGroup",
"iam:ListMFADevices",
"iam:ListUsers",
"iam:ListGroups",
"iam:ListAccessKeys",
"iam:ListVirtualMFADevices",
"iam:ListSSHPublicKeys",
"elasticloadbalancing:DescribeLoadBalancerPolicies",
"elasticloadbalancing:DescribeLoadBalancers",
"route53domains:ListDomains",
"rds:DescribeDBInstances",
"kms:ListKeys"
],
"Effect": "Allow",
"Resource": "*"
}
]
}
Error in here:
"iam:ListAccessKeys",
"iam:ListVirtualMFADevices"
"iam:ListSSHPublicKeys",
I've created a Docker image builder, using Packer, along with a Gradle build runner to parallelize. It creates Docker images for: CentOS 7.6, RHEL 7.6, Ubuntu 18.04, Ubuntu 16.04.
https://github.com/apolloclark/packer-cloudsploit
It runs great and takes only a couple of minutes to build all of the images.
I am not sure how the logger.js file referenced at https://github.com/cloudsploit/scans/blob/0d052daf9d8e141328027506322b7528de8d020f/helpers/oracle/index.js#L8 will be fetched.
In CloudFront scans, a scan is configured to WARN when a CloudFront distribution is available only over HTTPS:
else if (Distribution.DefaultCacheBehavior.ViewerProtocolPolicy == 'https-only'){
helpers.addResult(results, 1, 'The CloudFront ' +
'distribution is set to use HTTPS but not to' +
'redirect users accessing the endpoint over HTTP', 'global',
Distribution.ARN)
}
This contradicts the "More Info" of the scan:
For maximum security, CloudFront distributions can be configured to only accept HTTPS connections or to redirect HTTP connections to HTTPS.
These are presented as equal options, and HTTPS-only is even mentioned first. Additionally, this contradicts the spirit of the compliance section:
HIPAA requires all data to be transmitted over secure channels. CloudFront HTTPS redirection should be used to ensure site visitors are always connecting over a secure channel.
HTTPS-only also ensures site visitors are always connecting over a secure channel.
Also, there's a small typo in the scan that results in the WARN description rendering as "but not toredirect users".
According to the AWS docs:
Yes, you can create a snapshot of a PostgreSQL Read Replica, but you cannot enable automatic backups
In the rdsAutomatedBackups plugin, we should skip such instances. This can be done by checking for the presence of ReadReplicaSourceDBInstanceIdentifier in the RDS describeDBInstances call.
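A minimal sketch of that skip, assuming the instance objects come straight from the describeDBInstances response:

```javascript
// A read replica is identified by the presence of
// ReadReplicaSourceDBInstanceIdentifier on the instance.
function isReadReplica(dbInstance) {
    return !!dbInstance.ReadReplicaSourceDBInstanceIdentifier;
}

// Only non-replica instances should be evaluated for automated backups.
function instancesToCheck(dbInstances) {
    return dbInstances.filter(function(db) { return !isReadReplica(db); });
}
```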
We would like some plugins related to Shield Advanced.
Are there any plans to scan through Lambda function meta data?
Currently, the "S3 Bucket All Users Policy" plugin relies on S3 ACLs. However, AWS also uses IAM policies for buckets. Both should be used to determine the final result.
I implemented this in a fork of this repo. Not sure if this is the best way but it's working for me.
var async = require('async');
var plugins = require('./exports.js');
var collector = require('./collect.js');
var commandLineArgs = require('command-line-args');
var csvWriter = require('csv-write-stream');
var fs = require('fs');
//define the command-line args we will support
const optionDefinitions = [
{ name: 'export', type: String }
]
//parse the command-line args - available in options
const options = commandLineArgs(optionDefinitions)
var exportCSV = false;
if (options.export) {
if (options.export.toLowerCase() === 'csv') {
exportCSV = true;
}
}
var AWSConfig;
// OPTION 1: Configure AWS credentials through hard-coded key and secret
// AWSConfig = {
// accessKeyId: '',
// secretAccessKey: '',
// sessionToken: '',
// region: 'us-east-1'
// };
// OPTION 2: Load credentials from a JSON file
AWSConfig = require(__dirname + '/credentials.json');
// OPTION 3: ENV configuration with AWS_ env vars
if(process.env.AWS_ACCESS_KEY_ID && process.env.AWS_SECRET_ACCESS_KEY){
AWSConfig = {
accessKeyId: process.env.AWS_ACCESS_KEY_ID,
secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
sessionToken: process.env.AWS_SESSION_TOKEN,
region: process.env.AWS_DEFAULT_REGION || 'us-east-1'
};
}
if (!AWSConfig || !AWSConfig.accessKeyId) {
return console.log('ERROR: Invalid AWSConfig');
}
var skipRegions = []; // Add any regions you wish to skip here. Ex: 'us-east-2'
// Custom settings - place plugin-specific settings here
var settings = {};
// STEP 1 - Obtain API calls to make
console.log('INFO: Determining API calls to make...');
var apiCalls = [];
for (p in plugins) {
for (a in plugins[p].apis) {
if (apiCalls.indexOf(plugins[p].apis[a]) === -1) {
apiCalls.push(plugins[p].apis[a]);
}
}
}
console.log('INFO: API calls determined.');
console.log('INFO: Collecting AWS metadata. This may take several minutes...');
// STEP 2 - Collect API Metadata from AWS
collector(AWSConfig, {api_calls: apiCalls, skip_regions: skipRegions}, function(err, collection){
if (err || !collection) return console.log('ERROR: Unable to obtain API metadata');
console.log('INFO: Metadata collection complete. Analyzing...');
console.log('INFO: Analysis complete. Scan report to follow...\n');
if (exportCSV) {
var writer = csvWriter({headers: ["category", "title", "resource", "region", "statusWord", "message"]});
writer.pipe(fs.createWriteStream('results.csv'));
}
async.forEachOfLimit(plugins, 10, function(plugin, key, callback){
plugin.run(collection, settings, function(err, results){
for (r in results) {
var statusWord;
if (results[r].status === 0) {
statusWord = 'OK';
} else if (results[r].status === 1) {
statusWord = 'WARN';
} else if (results[r].status === 2) {
statusWord = 'FAIL';
} else {
statusWord = 'UNKNOWN';
}
console.log(plugin.category + '\t' + plugin.title + '\t' +
(results[r].resource || 'N/A') + '\t' +
(results[r].region || 'Global') + '\t\t' +
statusWord + '\t' + results[r].message);
//if the user asks us to export to csv lets do that
if (exportCSV) {
writer.write([plugin.category, plugin.title, (results[r].resource || 'N/A'), (results[r].region || 'Global'), statusWord, results[r].message])
}
}
callback(err);
});
}, function(err){
if (err) return console.log(err);
// End the CSV stream only after every plugin has finished writing results
if (exportCSV) {
writer.end();
console.log('INFO: Results available at results.csv.');
}
});
});
Based on user feedback, CloudSploit is prioritizing support for Google Cloud Platform as its next cloud (following the beta rollout of Azure, Oracle, and GitHub support).
We've begun the initial investigation required for authentication and collection interfaces, but are interested in community recommendations for an initial set of security checks and controls.
Please comment or vote on security checks for GCP that you would like to see.
monitoringMetrics.js looks for specific patterns in order to match metrics. However, a number of the patterns do not match how the AWS CLI actually returns data, causing false positives for failed best practices. Some of the patterns are clearly wrong: they have an extra quote or curly brace that would never appear in a valid pattern.
Tossing more ideas together... Sorry, this is a messy post in note-taking form and not truly organized.
1 - Use GitHub OAuth with the repo, read:repo_hook, write:repo_hook, and user:email scopes
2 - Borrow the detect_aws_credentials.py logic from Yelp's pre-commit-hooks
3 - Glue that logic into new webhooks that run the detect_aws_credentials checks and email the user that they've pushed AWS credentials. Here's an out-of-the-box Node.js webhook handler app...
GitHub notes also:
Note: If you are building a new integration, you should build it as a webhook. We suggest creating an OAuth application to automatically install and manage your users' webhooks. We will no longer be accepting new services to the github-services repository.
Other reference links: https://developer.github.com/v3/git/commits/#get-a-commit
Terminated instances are by design no longer in a VPC. As a result, they produce false positives for the "Detect EC2 Classic Instances" scan. detectClassic.js should filter out instances that are in the "terminated" state.
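A minimal sketch of the suggested filter, assuming the instance objects expose State.Name as the describeInstances response does:

```javascript
// Drop terminated instances before the EC2 Classic check, since they no
// longer belong to a VPC and would otherwise look like Classic instances.
function nonTerminatedInstances(instances) {
    return instances.filter(function(instance) {
        return !(instance.State && instance.State.Name === 'terminated');
    });
}
```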
Can plugins be added to check for mandating HTTPS and minimum TLS versions with CloudSearch?
Suggested addition to collect.js:
// ADDED PROXY CONFIGURATION
if (process.env.http_proxy) {
console.log('INFO: Setting proxy to [' + process.env.http_proxy + '] from the environment variable http_proxy...');
var proxy = require('proxy-agent');
AWS.config.update({
httpOptions: { agent: proxy(process.env.http_proxy) }
});
}
It seems that Aurora Postgres does not support publishing logs to CloudWatch Logs. Postgres RDS does, and Aurora MySQL does. I do not see anything in the Aurora Postgres docs which indicates that this is specifically a missing feature, but I am unable to select it, and I see a some people asking about this as a feature (no reply) on a couple web forums (https://www.reddit.com/r/aws/comments/9nbcev/aurora_postgres_error_logs_and_cloudwatch/, https://dba.stackexchange.com/questions/237564/how-can-we-publish-aurora-postgres-compatible-logs-to-cloudwatch), so I'm pretty sure it is not possible.
As such, the RDS Logging Enabled CloudSploit plugin reports this as an error, and there is no way to fix it other than suppressing the alert. It would be more graceful for the plugin to ignore Aurora Postgres databases until AWS supports this feature.
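A hedged sketch of such a skip, assuming describeDBInstances reports the engine name as 'aurora-postgresql':

```javascript
// Aurora Postgres cannot publish logs to CloudWatch Logs, so the logging
// check should not treat it as a failure.
function supportsLogExports(dbInstance) {
    return dbInstance.Engine !== 'aurora-postgresql';
}
```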
When doing an NPM install I get an error "No compatible version found: async@^2.0.0".
Any help?
Thanks!
Hello,
Are there meant to be two storageaccounts directories in the azure dir?
https://github.com/cloudsploit/scans/tree/master/plugins/azure
When I run CloudSploit I get an Azure module error referring to storageAccountsEncryption.js.
Is it due to there being two storageaccounts directories, when there should be one?
Thanks
We're currently getting a false positive from the CloudTrail Enabled plugin regarding not having global services enabled.
We have multiple active trails in our AWS account that get funneled to different downstream services. To avoid getting repeated global-services events from every trail, only one trail in our account has IncludeGlobalServiceEvents enabled. The plugin check currently breaks out of the for-loop after finding the first trail that is enabled (isLogging = true). Because of this, it cannot tell that another trail in describeTrails.data does indeed have global services enabled, and it misreports the configuration as an error.
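A minimal sketch of the fix: check every trail from describeTrails instead of stopping at the first logging one (field name assumed from the describeTrails response):

```javascript
// Pass if any trail in the account covers global service events, regardless
// of which trail the loop encounters first.
function globalServicesCovered(trails) {
    return trails.some(function(trail) {
        return trail.IncludeGlobalServiceEvents === true;
    });
}
```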
I think a possible filter for that kind of key could be the alias, which usually contains aws/*, or "KeyManager": "AWS".
https://github.com/cloudsploit/scans/blob/36fac15388486d418b53597fb264807e3a6982e1/plugins/aws/kms/kmsKeyRotation.js#L59
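A hedged sketch of the suggested filter; the field and alias shapes are assumptions based on the describeKey and listAliases responses:

```javascript
// Exclude AWS-managed keys from the rotation check, identified either by
// KeyManager === 'AWS' or by an 'alias/aws/' alias prefix.
function isCustomerManaged(keyMetadata, alias) {
    if (keyMetadata && keyMetadata.KeyManager === 'AWS') return false;
    if (alias && alias.indexOf('alias/aws/') === 0) return false;
    return true;
}
```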
The IAM role system for running in ECS makes use of an environment variable, AWS_CONTAINER_CREDENTIALS_RELATIVE_URI.
From http://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-iam-roles.html, performing:
curl 169.254.170.2$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI
will produce:
{
"AccessKeyId": "ACCESS_KEY_ID",
"Expiration": "EXPIRATION_DATE",
"RoleArn": "TASK_ROLE_ARN",
"SecretAccessKey": "SECRET_ACCESS_KEY",
"Token": "SECURITY_TOKEN_STRING"
}
This could then be directly assigned to the AWSConfig variable.
Could this method of obtaining credentials be added as a default in index.js?
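A hedged sketch of the field mapping only; actually fetching from 169.254.170.2 is left out, and the response shape is the one quoted above:

```javascript
// Map the ECS task credentials response onto the AWSConfig object index.js
// expects. The region fallback mirrors the existing env-var option.
function ecsCredentialsToAWSConfig(creds, region) {
    return {
        accessKeyId: creds.AccessKeyId,
        secretAccessKey: creds.SecretAccessKey,
        sessionToken: creds.Token,
        region: region || 'us-east-1'
    };
}
```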
Thanks!
Could you explain where/how the list of insecure ciphers in insecureCiphers.js was generated?
We got a couple of hits for insecure ciphers when running cloudsploit on our AWS account. We're using the recommended configuration from Mozilla (modern compatibility) which includes some of the ciphers that cloudsploit flags as insecure.
Thanks,
A new dependency was added but not included in package.json. This breaks clients who follow the standard workflow since they will not have the dependency.
See this change for a fix and small refactor to reduce the likelihood in the future #186
Hi, this is not an issue but a feature request. Do you plan to extend CloudSploit's capabilities to open source clouds like OpenStack? That could be a very interesting feature...
I am getting the below error while running a scan for my AWS account:
/Users/abhinav/cplodsploit/scans/node_modules/aws-sdk/lib/request.js:31
throw err;
^
RangeError: Maximum call stack size exceeded
at /Users/abhinav/cplodsploit/scans/index.js:85:16
at replenish (/Users/abhinav/cplodsploit/scans/node_modules/async/dist/async.js:836:21)
at /Users/abhinav/cplodsploit/scans/node_modules/async/dist/async.js:842:29
at /Users/abhinav/cplodsploit/scans/node_modules/async/dist/async.js:804:16
at /Users/abhinav/cplodsploit/scans/index.js:112:13
at /Users/abhinav/cplodsploit/scans/plugins/sns/topicPolicies.js:115:4
at /Users/abhinav/cplodsploit/scans/node_modules/async/dist/async.js:365:16
at replenish (/Users/abhinav/cplodsploit/scans/node_modules/async/dist/async.js:831:29)
at /Users/abhinav/cplodsploit/scans/node_modules/async/dist/async.js:842:29
at /Users/abhinav/cplodsploit/scans/node_modules/async/dist/async.js:804:16
Hi!
Thanks for the tool, we found it really useful for our cloud env.
I wonder if it's possible to audit all sub-accounts from the master (payer) account by assuming a role in them, instead of having to create an IAM user in every single sub-account and assign the security audit policy to it.
This approach would be much easier to manage and would also cover new sub-accounts every time they are created.
Thank you!
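As a hedged sketch of the assume-role approach, building per-account role ARNs; the role name is an assumption, not an existing convention in this repo, and the sts.assumeRole call per account is omitted:

```javascript
// Build the ARN of a shared audit role in each sub-account, ready to be
// passed to sts.assumeRole before running the scan against that account.
function auditRoleArns(accountIds, roleName) {
    return accountIds.map(function(id) {
        return 'arn:aws:iam::' + id + ':role/' + (roleName || 'SecurityAudit');
    });
}
```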
Improve suppression UX and add filtering.
(via ticket) Suppressing is a bit wonky. When you suppress, you get a notification up top, but there is nothing on the actual item to indicate it has now been suppressed. It would be nice to change the status from FAIL to SUPPRESSED so you know the item has been suppressed. In addition, it would be nice to be able to filter on suppressed items.
Check that AWS WAF is enabled for published APIs in API Gateway to protect against attacks.
High Level process