Last week we discussed The Quest for Unlimited Ingest, Part One, and the implications of replacing Splunk’s long-loathed ingest volume-pricing model. This week we want to explore further why every company must collect ALL of its logs, not just the ones it can afford.

Nothing is free when it comes to Splunk or any of the other cloud-based SIEM vendors like Elastic, Datadog, Sumo Logic, and many others. As you send more logs, the inefficiency and exorbitant cost structures of those platforms become exposed, and your budget and your checkbook get brutalized.

Splunk first introduced a workload-based (per-processor) pricing model in September 2019, justifying the change on the grounds that many of its customers were sending only 10-15% of their daily machine data volumes due to the high cost.

That meant Splunk’s customers were leaving seemingly innocuous but valuable data on the floor. This was among the key reasons the SolarWinds hack succeeded: a user was surreptitiously elevated to an admin account, and no one noticed. The more recent Microsoft Exchange hack exposed the same flaw, since Windows logs are ‘chatty’ and few organizations can afford to send all of their logs to a centralized log management or cybersecurity platform.

As we described in recent blogs, the underlying resource pricing models of Splunk, Elastic, Datadog, and Sumo Logic are very expensive. Each of these vendors overcharges customers because the platform architecture is incapable of processing large volumes of data on a single server or instance.

TWO KNOWN FACTS:

  • The more data you send to Splunk or any other platform, the more resources you need to add, and the more you will pay
  • Horizontal scaling is not really scaling, and it’s certainly not free

Did you also know that per-processor pricing may discourage customers from adding more powerful servers? Say goodbye to that speedy, high-performance 64-core dual-processor server you wanted to add. Further, if you need one search head server per 10 analysts, how many cores are needed in each search head server, and how many search heads will you need in total?
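To make that math concrete, here is a minimal back-of-the-envelope sketch. Only the one-search-head-per-10-people rule comes from the discussion above; the analyst count and the cores-per-search-head figure are our own illustrative assumptions, not vendor guidance.

```python
import math

# Back-of-the-envelope sizing under per-processor pricing.
# Every number here is an illustrative assumption; swap in your
# own analyst count and reference-architecture figures.
ANALYSTS = 50                  # assumed number of concurrent search users
ANALYSTS_PER_SEARCH_HEAD = 10  # the one-search-head-per-10-people rule above
CORES_PER_SEARCH_HEAD = 16     # assumed core count per search head server

search_heads = math.ceil(ANALYSTS / ANALYSTS_PER_SEARCH_HEAD)
licensed_cores = search_heads * CORES_PER_SEARCH_HEAD

print(f"{search_heads} search heads -> {licensed_cores} licensed cores")
# Output: 5 search heads -> 80 licensed cores, all of which count
# against a per-processor license before a single indexer is added.
```

And that is the search tier alone; every indexer core you add on top of it is licensed the same way.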

What are your options for maximum savings?

LogZilla NEO ingests, indexes, and displays 10 TB per day on a single 1U server. Using our multi-patented preduplication algorithm, you can reduce the data flow to Splunk by 40%-70% without losing a single byte, then send that enriched data downstream to Splunk instead of the raw feed.
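To illustrate the general idea only (this is a minimal sketch of count-preserving deduplication, not LogZilla’s patented algorithm), repeated events can be collapsed into a single enriched event that carries a duplicate count, so volume drops sharply while no signal is lost:

```python
from collections import Counter

def deduplicate(events: list[str], window: int = 1000) -> list[str]:
    """Collapse duplicate events within fixed-size windows, keeping a count."""
    forwarded = []
    for start in range(0, len(events), window):
        counts = Counter(events[start:start + window])
        for line, n in counts.items():
            # Enrich rather than drop: the duplicate count preserves the signal.
            forwarded.append(f"{line} [count={n}]" if n > 1 else line)
    return forwarded

raw = ["link flap on interface eth0"] * 480 + ["user X granted admin privileges"]
out = deduplicate(raw)
print(f"{len(raw)} raw events -> {len(out)} forwarded events")
# Output: 481 raw events -> 2 forwarded events; the rare admin event survives.
```

Notice that the chatty, repetitive events collapse to one line, while the single anomalous admin-privilege event, exactly the kind of needle that exposed SolarWinds, still reaches the downstream platform intact.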

The two top benefits of using LogZilla NEO for preduplication are improved effectiveness and productivity for the downstream Splunk analyst and significant cost savings, which free your organization to invest in other tools and infrastructure.

Ready to see how LogZilla NEO generates an ROI of greater than 40% in less than 90 days? That’s right: the more data you send per day, the higher the ROI, and the faster the payback!
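Here is a toy calculation of why ROI scales with volume. Every price below is an invented placeholder, not actual Splunk or LogZilla pricing; the structural point is that per-GB ingest savings grow linearly with daily volume while the deduplication layer’s cost stays flat.

```python
# Illustrative arithmetic only: the per-GB price, reduction rate, and
# platform cost below are invented placeholders, not real pricing.
INGEST_COST_PER_GB = 5.0          # assumed per-GB SIEM ingest price
REDUCTION_RATE = 0.5              # midpoint of the 40-70% reduction above
PLATFORM_COST_PER_YEAR = 500_000  # assumed flat annual dedup-layer cost

for daily_gb in (1_000, 5_000, 10_000):
    annual_savings = daily_gb * 365 * INGEST_COST_PER_GB * REDUCTION_RATE
    roi = (annual_savings - PLATFORM_COST_PER_YEAR) / PLATFORM_COST_PER_YEAR
    print(f"{daily_gb:>6} GB/day -> ROI {roi:,.0%}")
# With these made-up numbers, ROI climbs from roughly 82% at 1 TB/day
# to roughly 1,725% at 10 TB/day: more volume, higher ROI.
```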

Schedule your 15-minute DEMO now!

Posted March 12, 2021 in the Technology category
