May 7, 2020

Data Virtualization

Explore ZigiOps: Next-gen integration with Data Virtualization.

Having an efficient and scalable data management strategy is a must for organizations nowadays, and using a data virtualization layer can be a useful approach to handling the growing complexity of operations. It does come with its flaws and limitations, but it's a very good solution in many cases.

So, what is data virtualization?

Data virtualization is a data layer that integrates, manages, and delivers data from all enterprise systems to business users. The main goal is to provide a single point of access to data from different sources, as well as a single customer view of the available data.

The key ingredients that make up data virtualization are:

  1. Data Layer: The data virtualization approach provides a new way of accessing, managing, and delivering data without replicating it.
  2. Data Integrations: Data virtualization integrates data from all enterprise systems in your organization, regardless of their location and format.
  3. Data Management: Data virtualization provides a single, centralized layer where all of the unified data from different systems can be accessed by users.
  4. Real-Time Synchronization: Data virtualization synchronizes information from different sources in real time.
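
To make this concrete, here is a minimal Python sketch of such a layer: a single query interface over several live sources, with nothing copied or stored locally. The source names and fetch functions are hypothetical illustrations, not a real API.

```python
# Minimal sketch of a data virtualization layer: one query interface
# over several live sources, with no data replicated or stored locally.
# Source names and fetch functions below are hypothetical.
from typing import Callable, Dict, List

class VirtualDataLayer:
    def __init__(self) -> None:
        # Each source registers a fetch function; the data stays at the source.
        self.sources: Dict[str, Callable[[str], List[dict]]] = {}

    def register_source(self, name: str, fetch: Callable[[str], List[dict]]) -> None:
        self.sources[name] = fetch

    def query(self, entity: str) -> List[dict]:
        # Pull matching records from every source on demand and unify them
        # into a single view, tagging each record with its origin.
        results: List[dict] = []
        for name, fetch in self.sources.items():
            for record in fetch(entity):
                results.append({"source": name, **record})
        return results

# Usage: register two hypothetical systems and query them as one.
layer = VirtualDataLayer()
layer.register_source("crm", lambda entity: [{"id": 1, "type": entity}])
layer.register_source("itsm", lambda entity: [{"id": 42, "type": entity}])
print(layer.query("incident"))  # unified view; nothing copied to disk
```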

Why do we use data virtualization?

ZigiOps uses a data virtualization layer to provide you with a real-time overview of your data from all systems, without replicating it. All information is unified, and you have a clear, well-structured overview of it in our user-friendly UI.

We've used a data virtualization layer from the very beginning. We had already seen its potential, having used it in different environments before, and it is a choice we can now be certain was the right one.

Let us tell you more about the reasons for that.

ZigiOps' deep integrations go beyond the data that is obvious and easily available in different software tools. Deep integrations would be very time- and resource-consuming without a data virtualization layer, which made it the logical solution, as well as the perfect starting point for accessing the deeper levels of data and synchronizing them across different applications.

Data virtualization allows you to access your data directly, without the need for additional infrastructure or replication. Storing data at another location always requires additional resources and software; it comes at extra cost, takes time, and increases the risk of errors.

We offer a wide range of integrations for a number of different software tools, and our data virtualization layer helps us make sure that the integrations are executed quickly. Pre- and post-processing (which slow things down) are generally not necessary, and the data is loaded directly into its target destination.

Additionally, data becomes much more malleable and flexible thanks to the possibilities that a data virtualization layer offers: it's easier to modify different elements, even for past events.

The process of implementing additional features, or of standardizing different segments, also becomes simpler and faster: instead of reorganizing a database, we can simply extract information and transform it. This allows us to accommodate much more complex use cases and to help our clients solve problems that are particularly challenging for them and that require unconventional solutions.
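
As an illustration, here is a minimal Python sketch of standardizing a field at read time instead of reorganizing the underlying database. The field names and value mappings are hypothetical examples:

```python
# Sketch: normalizing a field on the fly, at read time. Different systems
# encode priority differently ("P1", 2, "critical"); the virtualization
# layer maps them onto one scale as records pass through. Values are hypothetical.
def normalize_priority(record: dict) -> dict:
    mapping = {"P1": "critical", "P2": "high", 1: "critical", 2: "high"}
    value = record.get("priority")
    return {**record, "priority": mapping.get(value, value)}

raw = [{"id": 7, "priority": "P1"}, {"id": 8, "priority": 2}]
print([normalize_priority(r) for r in raw])
# -> [{'id': 7, 'priority': 'critical'}, {'id': 8, 'priority': 'high'}]
```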

Let's now look at a specific example.

If a given application (e.g. one that is monitored by Dynatrace) running on a virtual machine (e.g. one monitored by vROps) encounters an availability disruption, Dynatrace will detect it and create a problem for it. However, the actual problem might not be in the application itself: the virtual machine might have been stopped for maintenance.

ZigiOps, based on the discovered schema (i.e. the sum of the fields and possible relationships of all connected systems), will capture the problem in Dynatrace, but, before submitting, it will perform a few checks. It will inspect whether there are any issues with the related resources in other systems, whether a change has been submitted in the ITSM system for this application/VM, or whether there is an already existing incident. Only then will ZigiOps execute the desired operation.
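
In code, that decision flow might look roughly like the Python sketch below. The systems match the example above, but every check function is a hypothetical stub, not ZigiOps' actual API:

```python
# Simplified sketch of the pre-submission checks described above.
# Every function here is a hypothetical stub, not ZigiOps' actual API.
def vm_has_issue_in_vrops(vm: str) -> bool:
    return False  # stub: would query vROps for alerts on this VM

def open_change_exists(app: str, vm: str) -> bool:
    return False  # stub: would query the ITSM system for open changes

def incident_already_exists(problem_id: str) -> bool:
    return False  # stub: would search for an existing incident

def create_incident(problem: dict) -> None:
    print(f"Creating incident for problem {problem['id']}")

def handle_dynatrace_problem(problem: dict) -> None:
    vm = problem["related_vm"]
    if vm_has_issue_in_vrops(vm):
        return  # root cause is the VM, not the application
    if open_change_exists(problem["application"], vm):
        return  # planned maintenance: do not raise a duplicate incident
    if incident_already_exists(problem["id"]):
        return  # the problem is already being handled
    create_incident(problem)  # only then execute the desired operation

handle_dynatrace_problem(
    {"id": "P-123", "application": "web-shop", "related_vm": "vm-42"}
)
```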

Data Virtualization Benefits

To summarize, data virtualization has a number of benefits, and there are many reasons why it is the right approach in many situations.

These include:

  • Instant access to your data: one of the main benefits of using a data virtualization layer is having real-time access to all of your data in a single, unified place.
  • Extracting data from disparate sources: data virtualization can extract data from all sources and types, regardless of formatting.
  • A single point of access: with data virtualization, the risk of errors and data loss is significantly reduced.
  • The rest of the infrastructure can continue to function as usual: data virtualization complements existing data infrastructure, allowing it to maintain its functionalities without any disruptions.
  • Reduced data storage needs and costs: the data does not need to be replicated or moved around, which means that a data virtualization layer is cheaper to maintain in the long run, and that costs are predictable.
  • User-friendly overview of the company's data: depending on how the data virtualization layer is built and used, it can provide you with a comprehensive overview of all of your data.

Some shortcomings and possible downsides of data virtualization

Of course, data virtualization comes with its own challenges and limitations, too.

Creating a data virtualization layer is not simple, and it requires a lot of highly specialized expertise, which means that it's not necessarily an approach that is viable, or even necessary, for every organization. It doesn't solve a specific problem or set of problems; rather, it is a method that can be used in combination with other methods.

Additionally, using data virtualization to extract and load data makes you dependent on the uptime of the different systems, which is crucial for hybrid integrations. If a system is not available at a given moment, data cannot be extracted from it, either. In general, this problem presents itself less and less often, but it is still important to keep it in mind.
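
A common mitigation is to retry a temporarily unavailable source before surfacing the failure. Here is a minimal Python sketch, assuming a fetch callable that raises ConnectionError while the source is down:

```python
import time
from typing import Callable, List

# Sketch: retry a temporarily unavailable source with exponential backoff.
# A virtualization layer cannot serve records from a system that is down,
# so a few retries are attempted before the failure is surfaced.
def fetch_with_retry(
    fetch: Callable[[str], List[dict]],
    entity: str,
    attempts: int = 3,
    delay: float = 1.0,
) -> List[dict]:
    for attempt in range(attempts):
        try:
            return fetch(entity)
        except ConnectionError:
            if attempt == attempts - 1:
                raise  # the source stayed down; surface the failure
            time.sleep(delay * (2 ** attempt))  # exponential backoff
    return []  # unreachable for attempts >= 1; keeps the signature honest
```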

Data virtualization is not applicable to every project and isn't convenient for every type of data. Sometimes, more traditional approaches could have an added benefit, or be easier to introduce and maintain.

The takeaway on data virtualization

Companies employ different strategies to manage their data, and there isn't a single right answer. The growing complexity of business operations leads to an ever-increasing number of disparate data sources that each company is using, and the architecture of data infrastructure is becoming progressively more complicated as a result.

Data virtualization allows for real-time synchronization of heterogeneous data sources without the need to replicate data, thus minimizing infrastructure costs. It guarantees dynamic and flexible data exchange and, whenever necessary, data transformation.

It is not a one-size-fits-all solution, but rather an instrument that can be used in many different contexts, and it is particularly convenient for deep integrations between enterprise software tools, due to its unrivaled time- and cost-effectiveness.
