IT Modernization and DevOps News in Review
This week saw cloud security in the headlines again, with yet another public disclosure of a misconfigured Amazon S3 bucket that left data wide open to the public. With Black Hat underway in Las Vegas this week, it is little wonder that we also saw several news items about organizations bringing more security tools to DevOps and container-based environments.
A misconfigured bucket in AWS Simple Storage Service (S3) led the news this week, exposing sensitive data from thousands of GoDaddy servers online. This is the second such incident within a month: in mid-July, an open S3 bucket exposed more than 2,500 files belonging to a political autodialing company.
It should be noted that Amazon S3 buckets are securely configured by default. AWS has gone to great lengths to make S3 simple, powerful, and secure, but the granular controls that enable that flexibility can also invite misconfiguration. In our experience, wide-open access to an S3 bucket is granted in only two cases:
- when an organization is intending to host a public website there, or
- when an engineer is testing something and wants to skip the hassle of authentication or passwords.
The latter often leads to engineers forgetting that they left something open to the world. This news is an unfortunate reminder that good security hygiene is essential when designing, building, and managing AWS environments.
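As a quick illustration of that hygiene point (not GoDaddy's actual configuration), the most common form of a wide-open bucket is a policy statement that grants access to the wildcard principal. A minimal sketch of a check for that pattern, using a hypothetical policy document:

```python
import json

def find_public_statements(policy_json: str) -> list:
    """Return the Sids of Allow statements granting access to everyone."""
    policy = json.loads(policy_json)
    public = []
    for stmt in policy.get("Statement", []):
        principal = stmt.get("Principal")
        is_public = principal == "*" or (
            isinstance(principal, dict) and principal.get("AWS") == "*"
        )
        if stmt.get("Effect") == "Allow" and is_public:
            public.append(stmt.get("Sid", "<no Sid>"))
    return public

# Hypothetical policy left over from testing: anyone can read objects.
sample = json.dumps({
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "LeftOpenForTesting",
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::example-bucket/*",
    }],
})

print(find_public_statements(sample))  # ['LeftOpenForTesting']
```

Running a scan like this across an account's bucket policies is a cheap way to catch the "temporarily open for testing" buckets before someone else finds them.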
DevOps Tools News
- Anchore brought to market the newest version of Anchore Enterprise. The new release features a graphical policy editor, a rich user interface (UI), dashboards, audit logs, a configurable on-premises vulnerability data service, and more. Operators can now create custom policies that cover all aspects of container images, including operating system packages, configuration files, user-supplied binaries, and third-party software libraries such as Node.js modules, Ruby gems, Python modules, and Java packages.
- In related news, Tripwire launched Tripwire for DevOps, a SaaS solution that integrates security assessments into the DevOps life cycle and toolchain, providing visibility into the security state of underlying application infrastructure throughout the pipeline.
- Ixia announced in a press release that it is bringing packet-level visibility to workloads in containers and Kubernetes clusters across cloud platforms. The goal is to give security and network teams the visibility they need to diagnose critical security and performance issues in their container-based environments.
- Our DevOps architects were excited by this article from Tracy Miranda on building serverless CI/CD pipelines with Jenkins. Featured on the Jenkins blog, the piece shares a synopsis of Anubhav Mishra’s talk on the topic.
- On Thursday, AWS announced the general availability of Amazon Aurora Serverless. According to the press release, Aurora Serverless is a new deployment option for Amazon Aurora that automatically starts, scales, and shuts down database capacity, with per-second billing, for applications with less predictable usage patterns. It brings the power of the MySQL-compatible database built for the cloud to applications with intermittent or cyclical usage, without the need to provision, scale, or manage any database servers.
- Amazon announced this past week that Amazon Virtual Private Cloud (VPC) Flow Logs can now be delivered to Amazon Simple Storage Service (S3) using the AWS Command Line Interface (CLI) or through the Amazon EC2 or VPC console. This is ideal when operators need simple, cost-effective archiving of their flow log events.
- AWS Config added support for AWS Shield this week. Operators can now record configuration changes to AWS Shield (a managed Distributed Denial of Service protection service) using AWS Config, tracking changes to protection settings, such as which resources are protected, in order to maintain a configuration change history for audit and operational troubleshooting purposes.
- And we also saw news from Amazon that AWS Config now enables operators to delete their data by specifying a retention period for configuration items. Specify a period between 30 days and seven years, and AWS Config retains configuration items for that period and automatically deletes any that are older. If no retention period is specified, configuration items are stored for seven years.
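Once VPC Flow Logs start landing in S3 (per the Flow Logs item above), each record in the default version 2 format is a space-separated line of fourteen fields. A minimal parser sketch, with a made-up sample record:

```python
# Field names follow the default VPC Flow Log record format.
FIELDS = [
    "version", "account_id", "interface_id", "srcaddr", "dstaddr",
    "srcport", "dstport", "protocol", "packets", "bytes",
    "start", "end", "action", "log_status",
]

def parse_flow_log_record(line: str) -> dict:
    """Split one default-format flow log line into named fields."""
    return dict(zip(FIELDS, line.split()))

# Hypothetical record: an accepted SSH connection (dstport 22, protocol 6 = TCP).
record = parse_flow_log_record(
    "2 123456789010 eni-abc123de 172.31.16.139 172.31.16.21 "
    "20641 22 6 20 4249 1418530010 1418530070 ACCEPT OK"
)
print(record["action"])  # ACCEPT
```

From here it is a short step to filtering archived logs for rejected traffic or unexpected destination ports.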
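The retention rule in the AWS Config item above is easy to mirror in tooling that validates account settings. A small sketch of the behavior as described (the helper name is ours; AWS documents the seven-year maximum as 2,557 days):

```python
SEVEN_YEARS_DAYS = 7 * 365 + 2  # 2,557 days, the documented maximum

def effective_retention_days(requested_days=None):
    """Apply the AWS Config retention rule: 30 days to 7 years, default 7 years."""
    if requested_days is None:
        return SEVEN_YEARS_DAYS  # no period specified: keep for seven years
    if not 30 <= requested_days <= SEVEN_YEARS_DAYS:
        raise ValueError("retention must be between 30 and 2,557 days")
    return requested_days

print(effective_retention_days())    # 2557
print(effective_retention_days(90))  # 90
```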
If you happen to be in Austin, TX on August 30th, please join us as we host HashiCorp CTO, Armon Dadgar, as he presents on using Consul Connect to secure service-to-service communication. Click here for further information and to RSVP.