In this week’s DevOps news, we learn that developers spend an increasing amount of time cleaning up data rather than working on solutions that deliver value to the company, according to a survey by SD Times and Melissa. Surveyed developers said they spend approximately one full day per week wrangling with data issues, which they attribute to duplicate, inconsistent, and incomplete data, as well as old, incorrect, and misfielded data. More than half of the respondents said they are involved in data quality input, data quality management, choosing validation APIs or API data quality solutions, and data integration.
IT Modernization and DevOps News Week in Review 8.17.2020
- HashiCorp announces the availability of a new Business tier for its Terraform Cloud. Touted as offering features for advanced security, compliance, and governance, the new tier offers single sign-on, audit logs for viewing significant events, the ability to run multiple concurrent Terraform jobs, and added service level agreements for greater support flexibility.
- Datadog introduces several new products, including:
- Continuous Profiler, a code profiler that measures the performance of code in production.
- Compliance Monitoring, which identifies misconfigurations that cause compliance drift.
- Error Tracking, which automatically collects real-time application errors and aggregates them into actionable issues for engineering teams.
- Incident Management, which streamlines on-call response workflows for DevOps teams with unified alerting data, documentation, and collaboration.
- Marketplace, an online platform for Datadog partners to develop and sell applications and integrations built on Datadog.
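Production profilers like Datadog’s Continuous Profiler report where a program spends its time. As a minimal local illustration only (not Datadog’s agent), Python’s built-in cProfile, a deterministic tracer rather than a sampling profiler, captures the same kind of per-function timing data:

```python
import cProfile
import io
import pstats

def slow_sum(n):
    """Deliberately unoptimized loop to give the profiler something to measure."""
    total = 0
    for i in range(n):
        total += i * i
    return total

profiler = cProfile.Profile()
profiler.enable()   # start tracing call/return events
slow_sum(200_000)
profiler.disable()  # stop collecting

# Render the top functions ranked by cumulative time.
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
print(stream.getvalue())
```

The report lists `slow_sum` among the most expensive calls; a continuous profiler runs this kind of collection constantly in production with low overhead, rather than on demand.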
- Google introduces TensorFlow Recorder, an open source project that enables data scientists, data engineers, and AI/ML engineers to create image-based TFRecords with just a few lines of code.
- JetBrains releases a beta of Space, its free team environment for collaboration across development processes.
- Go releases Go 1.15 with improvements to the Go linker, improved allocation for small objects at high core counts, X.509 CommonName deprecation, support in GOPROXY for skipping proxies that return errors, a new embedded tzdata package, and several core library updates.
- New Relic and Grafana Labs team up to deliver a new integration. According to a joint announcement, “Prometheus users can use the Prometheus remote write capability to send metric data directly to New Relic’s Telemetry Data Platform with a single configuration change. Additionally, Grafana open source users can now add the Telemetry Data Platform as a Grafana data source using Grafana’s native Prometheus data source.”
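Per the announcement, the integration hinges on Prometheus’s standard `remote_write` mechanism. A hypothetical stanza for `prometheus.yml` might look like the following (the exact endpoint URL and credential handling should be confirmed against New Relic’s documentation):

```yaml
# Hypothetical example: forward Prometheus metrics to New Relic's
# Telemetry Data Platform via remote_write. The URL query parameter
# and bearer token placeholder are illustrative.
remote_write:
  - url: https://metric-api.newrelic.com/prometheus/v1/write?prometheus_server=my-prometheus
    bearer_token: <YOUR_NEW_RELIC_LICENSE_KEY>
```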
- AWS makes Amazon Braket generally available. The fully managed service helps scientists, researchers, and developers experiment with quantum computing — from a classically-powered circuit simulator to quantum computers from D-Wave, IonQ, and Rigetti — in a single place.
- AWS launches the general availability of its AWS Security Hub Automated Response & Remediation solution. The reference implementation offers enterprises a library of automated security response and remediation actions for common security findings.
- AWS unveils AWS Glue 2.0. The latest version features Spark ETL jobs that start 10x faster. According to the company, “this reduction in startup latencies reduces overall job completion times, supports customers with micro-batching and time-sensitive workloads, and increases business productivity.” Other new AWS Glue updates include the ability to stop and restart your workflows and the capability to set the maximum number of concurrent runs for your Glue workflow.
- AWS CodeDeploy supports deployments to VPC endpoints, enabling operators to deploy internal applications without using an Internet gateway, public IP addresses, or a VPN connection.
- Amazon Elastic Container Service makes additional network metrics available for containers, so operators can now measure the rate of network traffic to and from containers in ECS Container Insights in Amazon CloudWatch or via the ECS task metadata endpoint.
- AWS integrates Amazon Kinesis Data Firehose with MongoDB Cloud. Operators can now stream data through Amazon Kinesis Data Streams, or push data directly to Kinesis Data Firehose, and configure it to deliver data to MongoDB Atlas using Kinesis Data Firehose HTTP endpoint delivery.
- For enhanced cloud cost control, Amazon adds AWS Fargate for AWS EKS to its Compute Savings Plans.
- Last, our AWS Consulting team enjoyed this AWS blog on how to build DISA STIG-compliant Amazon Machine Images using Amazon EC2 Image Builder.
- NTT DATA announces Deploy Containers for AWS. When beginning the containerization process, teams must answer dozens of questions — many of which have ramifications down the road — making those decisions all the more important. Unfamiliar with the terrain, teams often find themselves in analysis paralysis as they look to educate themselves and avoid making poor design choices. Enter our new reference architecture that helps shortcut the process, trimming weeks off the containerization effort.
- In his book, Good to Great, Jim Collins describes an important business cycle called the doom loop, a negative cycle created by reaction without understanding. The doom loop illustrates the importance of the OODA (observe-orient-decide-act) loop as a model for responding to shifting market situations with intelligent agility. Read how to apply smart agility in our latest blog: Smart or Stupid Agility: Which Are You?
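The OODA cycle above can be sketched as a simple control-loop function. The code below is purely illustrative — the function names and the toy thermostat policy are inventions for this sketch, not anything from Collins’ book:

```python
def ooda_step(observe, orient, decide, act, state):
    """Run one observe-orient-decide-act cycle and return the new state."""
    observation = observe(state)   # Observe: gather raw data about the situation
    context = orient(observation)  # Orient: interpret the data into meaning
    action = decide(context)       # Decide: choose a response
    return act(action, state)     # Act: apply it, producing the next state

# Toy example: a thermostat-like agent reacting to temperature.
state = {"temp": 80, "cooling": False}
new_state = ooda_step(
    observe=lambda s: s["temp"],
    orient=lambda t: "hot" if t > 75 else "ok",
    decide=lambda c: "cool" if c == "hot" else "idle",
    act=lambda a, s: {**s, "cooling": a == "cool"},
    state=state,
)
print(new_state)  # {'temp': 80, 'cooling': True}
```

The point of the model is that each pass re-observes reality before acting again — reacting without the observe and orient steps is exactly the doom loop Collins warns about.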
Written by Flux7 Labs
Flux7, an NTT DATA Company, is the only Sherpa on the DevOps journey that assesses, designs, and teaches while implementing a holistic solution for its enterprise customers, thus giving its clients the skills needed to manage and expand on the technology moving forward. Not a reseller or an MSP, Flux7 recommendations are 100% focused on customer requirements and creating the most efficient infrastructure possible that automates operations, streamlines and enhances development, and supports specific business goals.