In an earlier blog post, we showed how much Splunk could save by reducing the data flow to, and the number of resources used at, cloud hosting vendors like AWS and Azure. Those savings could then be passed on to end-users.
How do you know you are being overcharged by any of these cloud vendors?
What’s the “Tell”?
Answer: When vendors like Splunk, Elastic, DataDog, Sumo Logic, and anyone else tell you that they can horizontally scale, but then charge you for the resources used as they “horizontally scale.” (Pro Tip: that’s not real scale.)
How do you force their hand to lower pricing?
This post shows similarly large and proportional Cost of Goods Sold (COGS) dollar savings for Elastic, DataDog, and Sumo Logic - see the data below.
First, here is some context.
Problem: Each day you send IT Architecture and machine data to some cloud-based log management or SIEM services provider. However, sending an extra 100-200 GB of data forces you to add a new resource, or two, or three — and your vendor’s pricing matrix is a combination of the volume of data sent plus all the resource usage.
The Not-So-Great Outcome: The more data you send, the more resources required, and the more you pay.
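To make that pricing structure concrete, here is a minimal sketch of the two-part cost model described above. All rates and thresholds are hypothetical, chosen for illustration only - they are not any vendor's actual pricing.

```python
# Illustrative model of the pricing matrix: a per-GB ingest charge plus
# a per-resource charge. All rates below are hypothetical.
GB_PER_RESOURCE = 150       # every ~150 GB/day forces another resource
PRICE_PER_GB = 0.25         # $ per GB ingested (hypothetical)
PRICE_PER_RESOURCE = 400.0  # $ per resource per day (hypothetical)

def daily_cost(gb_per_day):
    """Volume charge plus resource charge for one day of ingest."""
    resources = -(-gb_per_day // GB_PER_RESOURCE)  # ceiling division
    return gb_per_day * PRICE_PER_GB + resources * PRICE_PER_RESOURCE

# Sending more data raises BOTH terms: the volume charge and the
# resource count, which is why cost grows faster than data volume.
print(daily_cost(300))  # two resources at 300 GB/day
print(daily_cost(450))  # three resources at 450 GB/day
```

The step function in the resource term is the key: every extra block of data ratchets up the resource count, not just the volume charge.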
Why Are You Overpaying? Splunk, Elastic, Sumo Logic, and DataDog are all public companies that provide cloud-based log management and SIEM services. Each quarter they report their cloud business revenues and cloud cost of goods sold (COGS), so the cloud operating margin follows directly from those two figures. In other words, each quarter they report the money they collect from customers like you, and the money they pay to hosting vendors like AWS and Azure. Those are hard dollars paid to Amazon and Microsoft, reported every quarter, and subsequently passed on to their customers - that’s you again. They break out the cloud segments because they want to show you how fast their cloud operations are growing and that they are keeping up with the digital transformation currently occurring in the economy. Recall that Microsoft said in April that its Azure platform experienced two years of digital transformation in two months. Everyone appears to be crowding through the same doorway, and the result is that the end-user is overpaying.
Key Question: How do you save money without losing any data fidelity, maintain the source data, reduce the data flow sent downstream, reduce the number of resources used, do so in true real-time, and still be able to use your current downstream SIEM?
If you can do all this, it should be a no-brainer, right?
Once again, let’s use the cloud margin numbers of these vendors as a heuristic device to show you how much money could be saved by these vendors. If the end-user positions LogZilla NEO in front of their respective platforms, those savings should flow to the end-user (you), rather than to those vendors. We should also note that the numbers below reflect the savings in the first year alone. Therefore, as the cloud business grows, the savings increase proportionately.
Using Elastic's actual and estimated financial results, the sum of the quarterly reductions equals $51 Million at 50% dedup and $74 Million at 70% dedup.
Interestingly, if Elastic spends less on cloud host resources, its cloud operating margin increases by about 1200 basis points and its overall company gross margin increases by about 1000 basis points - every quarter.
Using DataDog's actual and estimated financial results, the quarterly reductions sum to $66 Million at 50% dedup and $92 Million at 70% dedup.
Interestingly, if DataDog spends less on cloud host resources, its overall cloud operating margin increases by about 1100 basis points - every quarter.
Using Sumo Logic's actual and estimated financial results, the sum of the quarterly reductions equals $31 Million at 50% dedup and $41 Million at 70% dedup.
Interestingly, if Sumo Logic spends less on cloud host resources, its overall cloud operating margin increases by about 1500 basis points - every quarter.
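The arithmetic behind figures like these is straightforward: multiply each quarter's cloud COGS by the dedup rate and sum across the year. Here is a minimal sketch using hypothetical quarterly COGS numbers - these are illustrative placeholders, not any vendor's actual reported results.

```python
# Hypothetical quarterly cloud COGS figures in $M -- illustrative only,
# not Elastic's, DataDog's, or Sumo Logic's reported numbers.
quarterly_cloud_cogs = [22.0, 24.5, 27.0, 30.0]

def annual_savings(cogs_by_quarter, dedup_rate):
    """Sum the per-quarter COGS reduction implied by a given dedup rate."""
    return sum(q * dedup_rate for q in cogs_by_quarter)

print(annual_savings(quarterly_cloud_cogs, 0.50))  # $M saved at 50% dedup
print(annual_savings(quarterly_cloud_cogs, 0.70))  # $M saved at 70% dedup
```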
The bottom line is that end-users are spending so much on those downstream services because those platforms require them to add additional resources for every 100-200 GB of IT Architecture and machine data sent - so-called “horizontal scaling.”
LogZilla NEO to the Rescue
LogZilla solves this problem in 30 seconds. Position LogZilla NEO in front as the Manager of Managers, or “MoM.”
As a customer, using fewer resources downstream means less cost to you! It also means that your downstream software stacks run more efficiently and communicate across the various data silos, making you more productive.
Deduplication is LogZilla’s patented algorithm to reduce the duplicate data flow and eliminate noise. This is done in true real-time, and it’s done at full fidelity. The second data is ingested is the second it’s indexed, and the same second it appears on your monitor. True real-time. True full fidelity of the real-time data stream. LogZilla’s patented Deduplication algorithm maintains all your source data and your source-tracking to preserve data integrity.
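To illustrate the general idea - collapsing repeated events into one record plus a counter so that nothing is lost - here is a toy windowed deduplicator. This is emphatically not LogZilla's patented algorithm; it is a simplified sketch of how dedup can shrink a stream while preserving fidelity.

```python
# Toy windowed deduplicator: identical messages from the same host
# within one time window collapse into a single record with a count.
# The count preserves fidelity -- no event is silently dropped.
def deduplicate(events, window=60):
    """events: iterable of (timestamp, host, message) tuples.
    Returns one record per unique (window, host, message)."""
    seen = {}  # (time bucket, host, message) -> index into out
    out = []
    for ts, host, msg in events:
        key = (ts // window, host, msg)
        if key in seen:
            out[seen[key]]["count"] += 1  # duplicate: just bump the counter
        else:
            seen[key] = len(out)
            out.append({"first_ts": ts, "host": host, "msg": msg, "count": 1})
    return out

events = [
    (1, "fw1", "link flap eth0"),
    (2, "fw1", "link flap eth0"),
    (3, "fw1", "link flap eth0"),
    (4, "sw2", "port up gi0/1"),
]
result = deduplicate(events)
# Four raw events reduce to two records; the flap event carries count=3.
```

A real implementation would stream rather than batch, but the core trade is the same: downstream systems receive one enriched record instead of N copies.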
Depending on the configuration of your IT Architecture, LogZilla NEO achieves 40-60% dedup out of the box, and about 70% dedup after some tuning for the unique characteristics of your IT Infrastructure. The reduced data load also improves the performance of your IT Architecture. LogZilla’s data enrichment and automation are built in – no extra charge.
Eliminate Separate Silos of Information So All Teams Can Communicate More Effectively
It’s well-known, well-understood, and universally accepted that different teams within your organization have always had their separate silos of data. It’s very difficult to manage an entire organization’s Cyber and IT Architecture infrastructure when no team talks to another, and no one allows you to touch their data. We also know that the existence of separate data silos is the direct result of the antiquated architecture of legacy software vendors. The result is that separate silos prevent the different software stacks from communicating with one another efficiently, preventing real-time action and incident resolution.
LogZilla NEO overcomes that hurdle as it can process massive amounts of data on a single instance. Using LogZilla NEO as the “MoM” means that the user can ingest, index, and display 10 TB/day on a single instance, and up to 20 TB/day if LogZilla NEO is used as a simple pre-processing forwarder. The simple math means that if you need 100 instances to process 10 TB/day without LogZilla NEO, and LogZilla NEO provides 50% dedup at the front end, then you only need 51 instances - 50 downstream instances plus the single LogZilla NEO instance - meaning that your cost is roughly cut in half. At 70% dedup, you would only need 31 instances. You can calculate your savings: a reduced SIEM license, and a reduced need for downstream cloud instances.
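The instance arithmetic above can be sketched in a few lines. The per-instance capacity of 100 GB/day is derived from the "100 instances for 10 TB/day" figure in the paragraph; treat it as an assumption for illustration.

```python
# Sketch of the instance math: downstream instances needed after dedup,
# plus one LogZilla NEO instance positioned in front as the MoM.
# Assumes 100 GB/day capacity per downstream instance (illustrative).
def instances_needed(gb_per_day, dedup_rate, gb_per_instance=100):
    remaining_gb = round(gb_per_day * (1 - dedup_rate))  # data left after dedup
    downstream = -(-remaining_gb // gb_per_instance)     # ceiling division
    return downstream + 1                                # +1: LogZilla NEO itself

print(instances_needed(10_000, 0.50))  # 10 TB/day at 50% dedup -> 51
print(instances_needed(10_000, 0.70))  # 10 TB/day at 70% dedup -> 31
```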
Lastly, Elastic recently announced its intention to “fork” its codebase. You already knew that Elastic was charging you for the enhancements and plug-ins contained in the X-Pack. Our view is that Free isn’t free, but it’s a great marketing pitch. Not only will you have to pay for Elasticsearch, Kibana, and X-Pack, but signing the SSPL means that you may have added unnecessary business risk to your company. The innocuous search bar driven by Elasticsearch that helps your customers quickly filter women’s shoes may force you to reveal all your web source code. In the words of the SSPL: “…enabling third parties to interact with the functionality…” Ouch, indeed.
We can show you the overall $$ savings of positioning LogZilla NEO in front very quickly.
Look at how much you have to gain!