ELK Stack Introduction
E = Elasticsearch
A database used to store and search logs such as Windows event logs, firewall logs, syslogs, etc. It supports the piped query language ES|QL and exposes a RESTful API that works with JSON.
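To make the RESTful API + JSON idea concrete, here is a minimal Python sketch (stdlib only) that builds the HTTP request used to index one log document. The node URL, index name, and example fields are assumptions for illustration, not part of the original notes.

```python
import json
from urllib import request

# Hypothetical local node and index name -- adjust for your deployment.
ES_URL = "http://localhost:9200"
INDEX = "firewall-logs"

def build_index_request(es_url: str, index: str, doc: dict) -> request.Request:
    """Build the POST request that indexes one JSON document into an index."""
    return request.Request(
        url=f"{es_url}/{index}/_doc",       # Elasticsearch document index endpoint
        data=json.dumps(doc).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Example firewall event as a JSON-serializable dict.
log_event = {
    "@timestamp": "2024-05-01T12:00:00Z",
    "source.ip": "10.0.0.5",
    "event.action": "connection_denied",
}

req = build_index_request(ES_URL, INDEX, log_event)
# urllib.request.urlopen(req) would send it to a live node.
print(req.full_url)  # http://localhost:9200/firewall-logs/_doc
```

In practice a Beat or Logstash does this for you in bulk; the point is that everything speaks JSON over HTTP.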
L = Logstash
Collects telemetry from various sources, then transforms and filters it before outputting it to an Elasticsearch instance.
Ways to collect telemetry: Beats & Elastic Agents
Beats:
- Filebeat - for Logs
- Metricbeat - for Metrics
- Packetbeat - for Network Data
- Winlogbeat - for Windows Events
- Auditbeat - for Audit Data
- Heartbeat - for Uptime
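As a concrete example of how a Beat is pointed at a data source, here is a minimal sketch of a `filebeat.yml` that tails log files and ships them to Logstash. The paths and host are assumptions for illustration.

```yaml
filebeat.inputs:
  - type: filestream       # tail plain-text log files
    paths:
      - /var/log/*.log     # hypothetical location -- adjust per host

output.logstash:
  hosts: ["localhost:5044"]  # hand off to Logstash for parsing/filtering
```

Each Beat follows the same pattern: declare inputs, then declare an output (Logstash or Elasticsearch directly).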
Elastic Agent: Can collect all types of data using one unified agent per host.
Logstash is also highly customizable: filters let us ingest only what we ultimately want, which in turn helps reduce ingestion cost. Not every log source will have a parser ready to go, so we can also create and configure our own parser to pull certain values out of a raw log and map them to field names.
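A sketch of such a custom pipeline, using Logstash's grok filter to parse a raw line into named fields and a drop filter to cut ingestion cost. The grok pattern, field names, and index name are illustrative assumptions.

```
input {
  beats {
    port => 5044
  }
}

filter {
  # Custom parser: pull values out of the raw line and map them to field names.
  grok {
    match => { "message" => "%{IPORHOST:client_ip} %{WORD:http_method} %{URIPATH:request_path}" }
  }
  # Drop noisy events before they reach Elasticsearch to reduce ingestion cost.
  if [request_path] == "/healthcheck" {
    drop { }
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "web-logs"
  }
}
```

This input → filter → output shape is the standard structure of every Logstash pipeline.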
K = Kibana
A web console for querying the logs stored in our Elasticsearch instance. Kibana Lens is a visualization UI where we can drag and drop elements to build dashboards.
There is also a Discover tab that lets us query our logs using ES|QL, plus Machine Learning for anomaly detection, Elastic Maps for geospatial data, Metrics, Alerts, and more. All of these allow us to quickly:
- Search for data
- Create visualizations
- Create reports
- Create Alerts
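To give a feel for the piped ES|QL syntax you would type into Discover, here is an illustrative query; the index pattern and field values are assumptions (4625 is the Windows failed-logon event ID).

```
FROM winlogbeat-*
| WHERE event.code == "4625"
| STATS failed_logons = COUNT(*) BY source.ip
| SORT failed_logons DESC
| LIMIT 10
```

Each `|` pipes the previous result into the next command, much like a shell pipeline or Splunk's SPL.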
Similarities to Splunk
- Elasticsearch = Indexer / Search Head
- Logstash = Heavy Forwarder
- Kibana = Web GUI
- Beats/Agents = Universal Forwarders
Benefits of using ELK stack
- Centralized Logging (helps to meet compliance requirements & search through data if a security incident were to occur)
- Flexibility (customized ingestion)
- Visualizations (observe information at-a-glance)
- Scalability (easy to configure to handle larger environments)
- Ecosystem (many integrations and rich community)