Last Updated on January 18, 2024
To support enterprise-wide threat detection and response, a Security Information and Event Management (SIEM) solution needs to pull in data from cloud-based and on-prem sources, from endpoint detection and response (EDR) tools to logs and other security data sources.
Panther Labs sums up the goal as, “Detect any threat, anywhere.” Easier said than done with most SIEMs today.
Panther’s CEO, Jack Naglieri, explores Panther’s ability to cost-effectively ingest arbitrary security data at scale on a recent episode of The Virtual CISO Podcast. The show’s host is John Verry, Pivot Point Security CISO and Managing Partner.
A security data warehouse
Jack sums up the data transformation power of Panther: “The really powerful thing we’ve built is if any company has custom internal data or they have data that we just don’t support yet, but it goes to an Amazon S3 bucket or a queue or something, you can connect that into Panther and you can infer the schema of that data.”
On the other side of that inference process you get, in effect, a security data warehouse.
“We’ll basically ETL the data [and] transform [it] for you,” Jack summarizes. “And we’ll give you the ability to infer that schema—pull out the indicators like domains, IPs, hashes… things that you care about, that you want to pivot on. And you can write Python on it as it streams.”
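To make schema inference concrete, here is a minimal sketch of what inferring a flat schema from a batch of JSON log events might look like. The `infer_schema` helper and the sample field names are illustrative assumptions, not Panther's actual API; a production system would also handle nesting, nulls, and type conflicts.

```python
def infer_schema(events):
    """Infer a flat field -> type-name schema from a batch of log events.

    Illustrative only: real schema inference must reconcile nested
    objects, missing fields, and conflicting types across events.
    """
    schema = {}
    for event in events:
        for field, value in event.items():
            schema[field] = type(value).__name__
    return schema

# Two sample events from a hypothetical custom log source
batch = [
    {"src_ip": "10.0.0.5", "dest_port": 22, "user": "alice"},
    {"src_ip": "10.0.0.9", "dest_port": 443, "user": "bob"},
]

print(infer_schema(batch))
# {'src_ip': 'str', 'dest_port': 'int', 'user': 'str'}
```

Once a schema like this exists, downstream rules can reference typed fields (`dest_port` as an integer, `src_ip` as a string) instead of raw text.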
This works with any arbitrary data you have. In the traditional SIEM realm, that analytic capability could take years to build and require significant operational overhead to maintain.
“Just removing the Ops overhead for a system like that is such a massive win,” emphasizes Jack.
How Panther “normalizes” arbitrary security data
But what is Panther actually doing in that “ETL” phase? How is it “normalizing” security data for analysis?
“Normalization has a lot of different definitions,” clarifies Jack. “There’s normalization where you’re normalizing into the same format, because a lot of security logs exist as CSVs or JSONs or HTTP style logs or whatever. And there’s a normalization from raw log to structured log, which is more like parsing the log into its structure. And then there’s a normalization step after that.”
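The stages Jack describes—parsing a raw log into structure, then normalizing event times and pulling out indicators—can be sketched roughly as follows. The `p_`-prefixed field names echo Panther's convention for normalized fields, but this pipeline is an illustrative assumption, not Panther's implementation.

```python
import csv
import io
from datetime import datetime, timezone

RAW = "2024-01-18 10:15:02,10.0.0.5,login_failed"

def parse(raw_line):
    """Stage 1: raw log -> structured record (here, a simple CSV line)."""
    ts, src_ip, action = next(csv.reader(io.StringIO(raw_line)))
    return {"ts": ts, "src_ip": src_ip, "action": action}

def normalize(record):
    """Stage 2: normalize the event time to UTC ISO-8601 and pull out
    indicators (the fields you would pivot on across log types)."""
    dt = datetime.strptime(record["ts"], "%Y-%m-%d %H:%M:%S")
    dt = dt.replace(tzinfo=timezone.utc)  # assume source logs in UTC
    return {
        "p_event_time": dt.isoformat(),
        "p_any_ip_addresses": [record["src_ip"]],
        "action": record["action"],
    }

print(normalize(parse(RAW)))
```

With event times and indicators in a common shape, the same query (say, "show me everything touching this IP") can run across otherwise incompatible log sources.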
Jack continues: “The things that we focus on normalizing today are pulling out indicators and normalizing event times. Then in our rule engine, we have this mechanism of… ‘I want to look at all dest port 22 traffic that X, Y, Z (that other thing I care about) and I want to search across all of my logs.’ We have the ability to [do] that in real-time today. And then historically [that is, in a data lake] it’s a bit more around pivoting on indicators and things like that.”
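A rule in this style is just a Python function over each streaming event. The sketch below is modeled on Panther's Python detection pattern of a `rule(event)` function returning a boolean; the specific condition (the "X, Y, Z" part) and the internal IP range are stand-in assumptions.

```python
def rule(event):
    """Fire on dest port 22 (SSH) traffic from outside an assumed
    internal 10.x range; the extra condition is a stand-in for the
    'X, Y, Z' qualifier in the quote."""
    return (
        event.get("dest_port") == 22
        and not event.get("src_ip", "").startswith("10.")
    )

def title(event):
    """Human-readable alert title built from the matching event."""
    return f"SSH connection from {event.get('src_ip')}"

# Example: an external SSH connection matches; internal or non-SSH does not
print(rule({"dest_port": 22, "src_ip": "203.0.113.7"}))   # True
print(rule({"dest_port": 22, "src_ip": "10.0.0.5"}))      # False
print(rule({"dest_port": 443, "src_ip": "203.0.113.7"}))  # False
```

Because the engine evaluates this per event as data streams in, the same logic applies uniformly across every normalized log source rather than being rewritten per format.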
What’s next?
To enjoy the complete podcast episode with Panther CEO Jack Naglieri, click here.
Wondering if open source has a place in your security analytics arsenal? Check out this podcast on a leading open source solution for querying endpoints: EP#81 – Mike McNeil – Is Open Source the Future of Endpoint Security?