How many times have businesses been told that data is their most important and strategic asset? It’s the liquid gold, the lifeblood, the new source code that keeps the wheels of business turning in the digital era.

Data is the doorway to new business models, faster time-to-market and competitive differentiation. But could the next evolution to shift the dynamics of data and intelligence be DataOps (the practice of managing data for descriptive, predictive, prescriptive and cognitive analytics, leveraging artificial intelligence)? To succeed in the digital economy, organizations must gain actionable intelligence and operationalize the data pipelines that deliver it. So what is DataOps?

Gartner defines DataOps as a collaborative data management practice focused on improving the communication, integration and automation of data flows between data managers and data consumers across an organization. The goal of DataOps is to deliver value faster by creating predictable delivery and change management of data, data models and related artifacts. Accelerated DataOps is an operational take on Gartner’s definition: many organizations may claim to practice DataOps, but doing it effectively and efficiently is what counts.

Accelerated DataOps needs to:

  • Provide actionable intelligence for business intelligence and artificial intelligence pipelines alike, while catering to a multitude of diverse I/O requirements.
  • Provide operational agility for continuous integration/continuous delivery (CI/CD) pipelines, whether on-premises or in the cloud (a minimal sketch of such a pipeline check follows this list).
  • Provide end-to-end governance and security for data in flight and data at rest.
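To make the CI/CD point a little more concrete, here is a minimal, purely illustrative sketch of the kind of data-quality gate a DataOps team might run as a CI/CD job before promoting a new data drop to downstream consumers. The field names, severities and rules are hypothetical and not taken from any product mentioned in this article.

```python
# Illustrative sketch only: field names, severities and rules are hypothetical.
import csv
import io
import sys

REQUIRED_FIELDS = {"device_id", "timestamp", "severity"}
VALID_SEVERITIES = {"info", "warning", "error", "critical"}


def validate_events(csv_text: str) -> list[str]:
    """Return human-readable problems found in a new data drop."""
    problems: list[str] = []
    reader = csv.DictReader(io.StringIO(csv_text))
    missing = REQUIRED_FIELDS - set(reader.fieldnames or [])
    if missing:
        return [f"missing required columns: {sorted(missing)}"]
    for line_no, row in enumerate(reader, start=2):
        if not row["device_id"]:
            problems.append(f"line {line_no}: empty device_id")
        if row["severity"] not in VALID_SEVERITIES:
            problems.append(f"line {line_no}: unknown severity {row['severity']!r}")
    return problems


if __name__ == "__main__":
    sample = "device_id,timestamp,severity\nsw-01,2020-08-18T10:00:00Z,error\n"
    issues = validate_events(sample)
    print("OK" if not issues else "\n".join(issues))
    # A CI/CD job would fail the pipeline run on a non-zero exit code.
    sys.exit(1 if issues else 0)
```

A real pipeline would source its rules from a shared schema and fail the build on a non-zero exit code, which is how teams get the predictable delivery and change management of data that Gartner’s definition calls for.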

In essence, the enterprise needs to enable accelerated DataOps by solving challenges around storage, workflow and architecture. And even as AI and machine learning mature, there are a few challenges to overcome:


1. Multi-workload convergence

AI is increasingly converging the traditional high-performance computing and high-performance data analytics pipelines, resulting in multi-workload convergence. Data analytics, training and inference are now being run on the same accelerated computing platform. The result is an AI data pipeline with distinct stages, each with its own storage and performance requirements: ingest, for example, tends to be write-heavy and sequential, training favors high-throughput random reads, and inference is latency-sensitive.


2. Data anywhere with EdgeAI

IoT and 5G have brought AI to the edge, known as EdgeAI, which is expected to be even bigger than the cloud. The “edge” includes anything from the autonomous vehicle to the IP camera, from the magnificent to the mundane, yet every point on the edge needs infrastructure capable of feeding data pipelines that span edge, core and cloud. Architectures have to deliver performance at scale, and storage cannot be limited to traditional storage stacks, which cannot deliver insights at the scale these new workloads require.


3. Next-gen data lakes

High-performance data lakes need the scale to match the compute power and parallelism of accelerated platforms, with the ease of use of POSIX (Portable Operating System Interface). Storage platforms must be able to provide transparency, reproducibility of experiments, end-to-end security and, consequently, explainability.

The challenge is that organizations need to ensure their purchasing decisions position them to leverage accelerated DataOps in the future while minimizing the problems in their architecture today - and the first step is deploying LogZilla’s Network Event Orchestrator.

Posted August 18, 2020 in the Technology category
