You will explore logs and their role in Intrusion Detection Systems (IDS) and Security Information and Event Management (SIEM) systems. You’ll learn how these systems detect attacks. You’ll also be introduced to some IDS and SIEM products. In addition, you’ll write basic IDS rules to provide alerts for malicious network traffic.
Learning Objectives
- Discuss the importance of logs during incident investigation.
- Determine how to read and analyze logs during incident investigation.
- Describe how common intrusion detection system (IDS) tools provide security value.
- Interpret the basic syntax and components of signatures and logs in IDS and NIDS tools.
- Describe how SIEM tools collect, normalize, and analyze log data.
- Perform queries in SIEM tools to investigate an incident.
- Overview of logs
- Video: Welcome to module 4
- Video: The importance of logs
- Reading: Best practices for log collection and management
- Practice Quiz: Test your knowledge: Overview of logs
- Video: Rebecca: Learn new tools and technologies
- Video: Variations of logs
- Reading: Overview of log file formats
- Practice Quiz: Test your knowledge: Log components and formats
- Overview of intrusion detection systems (IDS)
- Examine Signatures with Suricata: A Hands-on Tutorial
- Overview of security information and event management (SIEM)
- Video: Reexamine SIEM tools
- Reading: Log sources and log ingestion
- Log forwarders
- Video: Query for events with Splunk
- Video: Query for events with Chronicle
- Reading: Search methods with SIEM tools
- Reading: Follow-along guide for Splunk sign-up
- Practice Quiz: Activity: Perform a query with Splunk
- Practice Quiz: Activity: Perform a query with Chronicle
- Practice Quiz: Test your knowledge: Overview of SIEM tools
- Review: Network traffic and logs using IDS and SIEM tools
- Congratulations on completing Course 6!
Overview of logs
Video: Welcome to module 4
Main points:
- Focus: Importance of logs and alerts in security investigations.
- Logs: Record event details used for analysis and evidence gathering.
- Activities:
- Reading and analyzing logs.
- Interpreting intrusion detection system signatures.
- Searching for relevant events in SIEM tools (Splunk, Chronicle).
- Benefits:
- Create context around alerts.
- Identify malicious behavior.
- Protect systems from attacks.
Key takeaways:
- Understanding logs and alerts is crucial for successful incident investigations.
- Analyzing logs requires knowledge of reading, interpreting, and connecting events.
- SIEM tools help efficiently search and access log data for investigations.
Overall:
This video emphasizes the importance of logs and alerts as valuable resources for security professionals during incident investigations. It introduces activities that train viewers on analyzing logs, interpreting signatures, and using SIEM tools effectively.
History books. Receipts. Diaries. What do all these things have in common? They record events. Whether it’s historical events, financial transactions, or private diary entries, records preserve event details. And having access to these details can help us in many ways.

Previously, we explored the different types of processes and procedures involved during each phase of the incident response lifecycle. In this section, we’ll direct our focus on one of the key components of incident investigation: logs and alerts. In security, logs record event details, and these details are used to support investigations.

First, you’ll learn all about logs: what they are and how they’re created. You’ll also learn how to read and analyze logs. Then, we’ll revisit intrusion detection systems. You’ll explore how to interpret signatures. You’ll have an opportunity to apply what you’ve learned through hands-on activities using a tool called Suricata. Finally, you’ll search in SIEM tools like Splunk and Chronicle to locate events of interest and access log data.

Events are a valuable data source. They help create context around an alert, so you can interpret the actions that took place on a system. Knowing how to read, analyze, and connect different events will help you identify malicious behavior and protect systems from attack. Ready? Let’s begin.
Video: The importance of logs
Focus: Logs as critical tools for detecting and investigating security incidents.
Key points:
- Logs: Records of events occurring within systems, providing timestamps, actions, sources, and details.
- Value: Offer insights into system performance and security monitoring.
- Log analysis: Process of examining logs to identify relevant events.
- Importance of selective logging: Filtering out irrelevant data for efficiency and faster analysis.
- SIEM tools: Centralize and normalize log data from various sources for faster search and analysis.
- Log collection: Log forwarders gather logs from different devices and systems to a central repository.
- Types of log data sources: Network, system, application, security, and authentication logs.
- Example: Analyzing a network log entry, understanding allowed access and timestamp.
Takeaways:
- Logs are essential for comprehensive security monitoring and investigating potential threats.
- Effective log analysis requires understanding event details, timestamps, and data sources.
- SIEM tools play a crucial role in efficient log management and analysis for faster incident response.
Overall:
This video highlights the significance of logs as evidence in security investigations and emphasizes the importance of log analysis and management tools in enhancing network security. The detailed walkthrough of a network log entry helps clarify log structure and interpretation.
Introduction:
Imagine you’re a detective investigating a crime scene. To piece together the story, you rely on clues like fingerprints, witness statements, and security camera footage. Similarly, in the digital world, logs play a crucial role in cybersecurity investigations.
What are logs?
Logs are like electronic diaries, recording events happening within your systems. They provide detailed information about actions, users, timestamps, and more. Think of them as breadcrumbs left behind by every digital activity, helping you track user actions and potential security incidents.
Why are logs important?
- Detection: Logs act as early warning signs. Unusual activities, suspicious login attempts, or malware execution attempts all leave traces in logs, allowing you to detect potential threats before they cause damage.
- Investigation: When an incident occurs, logs are your go-to evidence. They help you build a timeline of events, identify the source of the attack, and understand how the attacker moved through your systems.
- Compliance: Many regulations require organizations to maintain logs for a specific period. This helps demonstrate due diligence and provides evidence in case of audits or legal proceedings.
- Troubleshooting: Beyond security, logs can help diagnose system performance issues, identify resource bottlenecks, and track user activity for troubleshooting purposes.
Types of logs:
- System logs: Record events within operating systems, applications, and services.
- Network logs: Track network traffic, including source and destination IPs, protocols, and port numbers.
- Security logs: Capture events related to security tools like firewalls, intrusion detection systems, and antivirus software.
- Application logs: Specific to software applications, recording user actions, errors, and other events.
Effective log management:
- Collect from all sources: Ensure you gather logs from all devices and systems within your network.
- Centralize storage: Store logs in a central repository for efficient access and analysis.
- Standardize format: Convert logs to a common format for easier search and analysis across different sources.
- Monitor and analyze: Regularly review logs for suspicious activity and potential threats.
- Retain logs: Keep logs for a defined period based on compliance requirements and investigation needs.
Tools for log management:
- SIEM (Security Information and Event Management) tools: Aggregate and analyze logs from various sources in real-time.
- Log analysis platforms: Offer advanced search capabilities, visualizations, and threat detection features.
- Open-source logging tools: Free and powerful options for smaller organizations or specific needs.
Conclusion:
Logs are invaluable tools for security professionals. By understanding their importance, implementing effective log management practices, and leveraging the right tools, you can turn these digital breadcrumbs into actionable insights, enhancing your overall cybersecurity posture. Remember, strong logs can mean a stronger defense against cyber threats.
Additional tips:
- Conduct regular log analysis drills to practice incident response.
- Implement log tamper prevention measures to ensure log integrity.
- Stay updated on emerging threats and adapt your log analysis to identify new patterns.
By actively utilizing logs, you can transform your cybersecurity defense from reactive to proactive, ensuring your systems and data remain safe in the ever-evolving digital landscape.
Devices produce data in the form of events. As a refresher, events are observable occurrences that happen on a network, system, or device. This data provides visibility into an environment.

Logs are one of the key ways security professionals detect unusual or malicious activity. A log is a record of events that occur within an organization’s systems. System activity is recorded in what’s known as a log file, or commonly called logs. Almost every device or system can generate logs. Logs contain multiple entries, which detail information about a specific event or occurrence. Logs are useful to security analysts during incident investigation since they record details of what, where, and when an event occurred on the network. This includes details like date, time, location, the action made, and the names of the users or systems who performed the action. These details offer valuable insight, not only for troubleshooting issues related to system performance, but most importantly, for security monitoring. Logs allow analysts to build a story and timeline around various event occurrences to understand what exactly happened. This is done through log analysis. Log analysis is the process of examining logs to identify events of interest.

Since there are different sources available to get logs, an enormous volume of log data can be generated. It’s helpful to be selective in what we log, so that we can log efficiently. For example, web applications generate a high volume of log messages, but not all of this data may be relevant to an investigation. In fact, it may even slow things down. Excluding specific data from being logged helps reduce the time spent searching through log data.

You may recall our discussion on SIEM technology. SIEM tools provide security professionals with a high-level overview of what happens in a network. SIEM tools do this by first collecting data from multiple data sources. Then, the data gets aggregated, or centralized, in one place. Finally, the diverse log formats get normalized, or converted into a single preferred format. SIEM tools help process large log volumes from multiple data sources in real time. This allows security analysts to quickly search for log data and perform log analysis to support their investigations.

So how do logs get collected? Software known as log forwarders collects logs from various sources and automatically forwards them to a centralized log repository for storage. Since different types of devices and systems can create logs, there are different log data sources in an environment. These include network logs, which are generated by devices such as proxies, routers, switches, and firewalls, and system logs, which are generated by operating systems. There are also application logs, which are logs related to software applications; security logs, which are generated by security tools like IDS or IPS; and lastly, authentication logs, which record login attempts.

Here’s an example of a network log from a router. There are a couple of log entries here, but we’ll focus on the first line. Here, we can observe a number of fields. First, there’s an action specifying ALLOW. This means that the router’s firewall settings allowed access from a specific IP address to google.com. Next, there’s a field specifying the source, which lists an IP address. So far, the information from this log entry is telling us that network traffic to google.com from this source IP address is allowed. The last field specifies the timestamp, which is one of the most essential fields in a log. We can identify the exact date and time of an action that’s occurred. This is useful for correlating multiple events to develop a timeline of the incident.

There you have it! You’ve analyzed your first network log. Coming up, we’ll continue our discussion on logs and explore log formats.
Reading: Best practices for log collection and management
Reading
In this reading, you’ll examine some best practices related to log management, storage, and protection. Understanding the best practices related to log collection and management will help improve log searches and better support your efforts in identifying and resolving security incidents.
Logs
Data sources such as devices generate data in the form of events. A log is a record of events that occur within an organization’s systems. Logs contain log entries and each entry details information corresponding to a single event that happened on a device or system. Originally, logs served the sole purpose of troubleshooting common technology issues. For example, error logs provide information about why an unexpected error occurred and help to identify the root cause of the error so that it can be fixed. Today, virtually all computing devices produce some form of logs that provide valuable insights beyond troubleshooting.
Security teams access logs from logging receivers like SIEM tools which consolidate logs to provide a central repository for log data. Security professionals use logs to perform log analysis, which is the process of examining logs to identify events of interest. Logs help uncover the details surrounding the 5 W’s of incident investigation: who triggered the incident, what happened, when the incident took place, where the incident took place, and why the incident occurred.
Types of logs
Depending on the data source, different log types can be produced. Here’s a list of some common log types that organizations should record:
- Network: Network logs are generated by network devices like firewalls, routers, or switches.
- System: System logs are generated by operating systems like Chrome OS™, Windows, Linux, or macOS®.
- Application: Application logs are generated by software applications and contain information relating to the events occurring within the application such as a smartphone app.
- Security: Security logs are generated by various devices or systems such as antivirus software and intrusion detection systems. Security logs contain security-related information such as file deletion.
- Authentication: Authentication logs are generated whenever authentication occurs such as a successful login attempt into a computer.
Log details
Generally, logs contain a date, time, location, action, and author of the action. Here is an example of an authentication log:
Login Event [05:45:15] User1 Authenticated successfully
Logs contain information and can be adjusted to contain even more information. Verbose logging records additional, detailed information beyond the default log recording. Here is an example of the same log above but logged as verbose.
Login Event [2022/11/16 05:45:15.892673] auth_performer.cc:470 User1 Authenticated successfully from device1 (192.168.1.2)
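Structured fields like these can be pulled out programmatically. Here is a minimal Python sketch that parses the verbose entry above with a regular expression; the field names are illustrative choices, not part of any logging standard:

import re

# Pattern for the verbose login entry shown above. Field names are illustrative.
VERBOSE_LOGIN = re.compile(
    r"Login Event "
    r"\[(?P<timestamp>[\d/]+ [\d:.]+)\] "
    r"(?P<source_file>\S+) "
    r"(?P<user>\S+) Authenticated successfully "
    r"from (?P<device>\S+) \((?P<ip>[\d.]+)\)"
)

entry = ("Login Event [2022/11/16 05:45:15.892673] auth_performer.cc:470 "
         "User1 Authenticated successfully from device1 (192.168.1.2)")

match = VERBOSE_LOGIN.match(entry)
if match:
    print(match.groupdict())
    # {'timestamp': '2022/11/16 05:45:15.892673', 'source_file': 'auth_performer.cc:470',
    #  'user': 'User1', 'device': 'device1', 'ip': '192.168.1.2'}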
Log management
Because all devices produce logs, it can quickly become overwhelming for organizations to keep track of all the logs that are generated. To get the most value from your logs, you need to choose exactly what to log, how to access it easily, and keep it secure using log management. Log management is the process of collecting, storing, analyzing, and disposing of log data.
What to log
The most important aspect of log management is choosing what to log. Organizations are different, and their logging requirements can differ too. It’s important to consider which log sources are most likely to contain the most useful information depending on your event of interest. This might mean configuring log sources to reduce the amount of data they record, such as excluding excessive verbosity. Some information, including but not limited to phone numbers, email addresses, and names, constitutes personally identifiable information (PII), which requires special handling and in some jurisdictions might be prohibited from being logged.
The issue with overlogging
From a security perspective, it can be tempting to log everything. This is the most common mistake organizations make. Just because something can be logged doesn’t mean it needs to be logged. Storing excessive amounts of logs has many disadvantages. For example, with some SIEM tools, overlogging can increase storage and maintenance costs. Additionally, overlogging can increase the load on systems, which can cause performance issues, affect usability, and make it difficult to search for and identify important events.
Log retention
Organizations might operate in industries with regulatory requirements. For example, some regulations require organizations to retain logs for set periods of time and organizations can implement log retention practices in their log management policy.
Organizations that operate in the following industries might need to modify their log management policy to meet regulatory requirements:
- Public sector industries, which are subject to the Federal Information Security Modernization Act (FISMA)
- Healthcare industries, which are subject to the Health Insurance Portability and Accountability Act of 1996 (HIPAA)
- Financial services industries, which are subject to the Payment Card Industry Data Security Standard (PCI DSS), the Gramm-Leach-Bliley Act (GLBA), and the Sarbanes-Oxley Act of 2002 (SOX)
Log protection
Along with management and retention, the protection of logs is vital in maintaining log integrity. It’s not unusual for malicious actors to modify logs in attempts to mislead security teams and to even hide their activity.
Storing logs in a centralized log server is a way to maintain log integrity. When logs are generated, they get sent to a dedicated server instead of getting stored on a local machine. This makes it more difficult for attackers to access logs because there is a barrier between the attacker and the log location.
Key takeaways
It’s important to understand how to properly collect, store, and protect logs because they are integral to incident investigations. Having a detailed plan for log management helps improve the usefulness of logs and resource efficiency.
Practice Quiz: Test your knowledge: Overview of logs
What is the primary purpose of logs during incident investigation?
To provide a record of event details
The primary purpose of logs during incident investigation is to provide a record of event details. Knowing what occurred on systems, networks, and devices helps security analysts identify unusual or malicious activity.
A security analyst wants to determine whether a suspicious login was successful. Which log type would be most useful for this purpose?
Authentication
An authentication log would be most useful for this purpose. Authentication logs record login attempts, including whether a login was successful.
In the following log, what action does the log entry record?
[ALLOW: wikipedia.org] Source: 192.167.1.1 Friday, 10 June 2022 11:36:12
ALLOW
ALLOW refers to the action that has been recorded. In this instance, it allows access to wikipedia.org.
Fill in the blank: _____ is the process of examining logs to identify events of interest.
Log analysis
Log analysis is the process of examining logs to identify events of interest.
Video: Rebecca: Learn new tools and technologies
- The speaker, Rebecca, is a security engineer at Google who specializes in identity management.
- She enjoys thinking like an attacker and figuring out how to break into systems.
- Rebecca shares her experience of feeling overwhelmed when learning about network security on her first day in security.
- She encourages asking questions and not being afraid to admit when you don’t know something.
- This course covers a lot of tools and information, which can be overwhelming, but it’s like climbing a mountain – the view at the top is worth it.
- Completing these courses can improve your capabilities and potential for finding new jobs or changing careers.
[MUSIC] I am Rebecca, I’m a security engineer at Google, and I focus on identity management. The best part of the job is probably thinking like an attacker. I love that part of seeing how can I break stuff, seeing a system and figuring out how can I get into it. If I was a bad guy, what would I be wanting? What would I be looking for? How would I find the credentials? How would I find the machine that’s useful, and get onto it?

My first day in security, we were learning a new tool. The whole organization was in a training, and they’re like, we’re going to throw you in. It’s a one-week training to learn a network analyzer. I didn’t know anything about networks, let alone network security, or what this thing was going to be used for. And so I was very overwhelmed, because I felt like I was an imposter sitting in somebody’s seat who should belong there, and learning stuff way over my head. I pushed through it by asking a lot of questions, and setting aside that feeling like I should know things, because I’d never been exposed to it at that point. The only way I’m going to know is if I ask.

So, this course has a lot of tools, and covers a lot of information. And it can be very easy to be overwhelmed; in fact, I probably would be as well. There’s a lot of information that you can take in. I think of learning in a course like this, where there’s a series of courses for you to learn, that it’s like climbing a mountain. You’ve gotten so far up the mountain, and the air gets thin, and, yes, it is difficult. You feel overwhelmed, but you’re almost to the top. And know that when you get to the top, you’re going to have an amazing view of the world. And that’s the same thing of when you finish these courses. Your frame of mind and how you view things and your capabilities, your potential for finding new jobs, or changing careers is that much better.
Video: Variations of logs
Main points:
- Logs: Similar to receipts, record events/activities on networks or systems.
- Importance: Provide valuable information for security analysis and incident response.
- Diverse formats: Syslog, JSON, XML, CSV, etc., with varying levels of detail and readability.
- Key components: Timestamps, system characteristics (IP addresses), event descriptions, actions, and actors.
- Syslog: Common format with header, structured data, and message sections.
- JSON: Easy-to-read text format using key-value pairs.
- XML: Uses tags and keys to structure data.
- CSV: Simple format with data separated by commas.
Key takeaway:
Understanding different log formats is crucial for effectively analyzing and interpreting security events.
Additional notes:
- The video emphasizes the importance of building context around detections using logs.
- The next section will explore how intrusion detection system signatures utilize logs for suspicious activity detection.
Tutorial: Variations of Logs in Cybersecurity
Introduction
In the world of cybersecurity, logs are like digital footprints, recording events and activities that occur within systems and networks. They provide invaluable insights for security analysts, enabling them to detect anomalies, investigate incidents, and track user behavior. However, logs come in a wide variety of formats, each with its own structure and characteristics. Understanding these variations is essential for effective log analysis and interpretation.
Common Log Formats
- Syslog:
- One of the most widely used log formats.
- Employs a header, structured data, and message format.
- Example:
Feb 25 15:33:11 hostname app[12345]: This is a log entry!
- Syslog log format example: https://www.golinuxhub.com/2014/01/syslog-tutorial/
- JSON (JavaScript Object Notation):
- Human-readable, text-based format using key-value pairs.
- Lightweight and easy to parse.
- Example: JSON
{ "Alert": "Malware", "Timestamp": "2024-01-02T10:45:30Z", "Source": "Endpoint123" }
- JSON log format example: https://en.wikipedia.org/wiki/JSON
- XML (Extensible Markup Language):
- Structured format using tags and elements.
- Flexible for representing complex data.
- Example: XML
<?xml version="1.0"?>
<log>
  <event>
    <timestamp>2024-01-02T14:22:15Z</timestamp>
    <user>john.smith</user>
    <action>File access</action>
  </event>
</log>
- XML log format example: https://docs.stackify.com/docs/troubleshoot-errors-and-logs-net-configurations
- CSV (Comma-Separated Values):
- Simple format using commas to separate values.
- Easy to generate and process.
- Example:
Timestamp,User,Action,File
2024-01-02 15:05:00,jane.doe,Login,
2024-01-02 15:10:12,john.smith,File access,report.pdf
- CSV log format example: https://docs.panther.com/data-onboarding/custom-log-types/example-csv
Other Log Formats:
- Windows Event Logs: Log events from the Windows operating system.
- Apache Logs: Record web server activity.
- Firewall Logs: Track network traffic and access attempts.
- Application-Specific Logs: Generated by specific software applications.
Factors to Consider When Choosing a Log Format
- Readability: How easy is it for humans to understand the format?
- Parsability: How easy is it for machines to process the format?
- Structure: How well-organized is the data within the format?
- Standardization: Is the format widely used and supported by tools?
Effective Log Analysis
- Centralize log collection: Gather logs from various sources into a central repository.
- Normalize log formats: Convert logs into a common format for easier analysis.
- Filter and correlate events: Focus on relevant events and identify patterns.
- Visualize log data: Use tools to create charts and graphs for better understanding.
- Automate analysis tasks: Use scripts and tools to streamline analysis processes.
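To make the normalization and automation steps above concrete, here is a minimal Python sketch that converts a CSV row and a JSON entry into one common record shape; the source layouts and target field names are assumptions for the example:

import csv
import io
import json

def from_csv_row(row):
    # Assumes column order: timestamp, user, action (illustrative only).
    return {"timestamp": row[0], "user": row[1], "action": row[2]}

def from_json_line(line):
    obj = json.loads(line)
    # Assumes the source uses "Timestamp"/"User"/"Action" keys (illustrative only).
    return {"timestamp": obj["Timestamp"], "user": obj["User"], "action": obj["Action"]}

csv_entry = "2024-01-02 15:05:00,jane.doe,Login"
json_entry = '{"Timestamp": "2024-01-02T15:10:12Z", "User": "john.smith", "Action": "File access"}'

records = [from_csv_row(next(csv.reader(io.StringIO(csv_entry)))),
           from_json_line(json_entry)]
for record in records:
    print(record)  # both entries now share one schema, ready for filtering and correlation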
Conclusion
Understanding the variations of logs is a crucial skill for cybersecurity professionals. By mastering different log formats, you’ll be able to extract valuable insights, detect threats, and investigate incidents more effectively, ultimately contributing to the overall security posture of your organization.
When you purchase an item in a store, you usually receive a receipt as a record of purchase. The receipt breaks down the transaction information with details such as the date and time, the cashier’s name, the item name, cost, and the method of payment. But not all store receipts look the same. For example, receipts like automotive invoices use lots of detail when listing the items or services that were sold. You most likely won’t find this much detail on a restaurant receipt. Despite the differences among store receipts, all receipts contain important details that are relevant to the transaction.

Logs are similar to receipts. While receipts record purchases, logs record the events or activities that happen on a network or system. As a security analyst, you’ll be responsible for interpreting logs. Logs come in different formats, so not all logs look the same. But they usually contain information like timestamps, system characteristics like IP addresses, and a description of the event, including the action taken and who performed the action. We know that logs can be generated from many different data sources such as network devices, operating systems, and more. These log sources generate logs in different formats. Some log formats are designed to be human-readable while others are machine-readable. Some logs can be verbose, which means they contain lots of information, while some are short and simple. Let’s explore some commonly used log formats.

One of the most commonly used log formats is Syslog. Syslog is both a protocol and a log format. As a protocol, it transports and writes logs. As a log format, it contains a header, followed by structured-data, and a message. The Syslog entry includes three sections: a header, structured-data, and a message. The header contains data fields like Timestamp, the Hostname, the Application name, and the Message ID. The structured-data portion contains additional data information in key-value pairs. Here, the eventSource is a key that specifies the data source of the log, which is the value Application. Lastly, the message component contains the detailed log message about the event. In this example, “This is a log entry!” is the message.

Let’s explore another common log format you might encounter as a security analyst. JavaScript Object Notation, more popularly known as JSON, is a text-based format designed to be easy to read and write. It also uses key-value pairs to structure data. Here’s an example of a JSON log. The curly brackets represent the beginning and end of an object. The object is the data that’s enclosed between the brackets. It’s organized using key-value pairs, where each key has a corresponding value separated by colons. For example, for the first line, the key is Alert and the value is Malware. JSON is known for its simplicity and easy readability. As a security analyst, you’ll use JSON to read and write data like logs.

eXtensible Markup Language, or XML, is a language and a format used for storing and transmitting data. Instead of key-value pairs, it uses tags and other keys to structure data. Here, we have an example of an XML log entry with four fields: firstName, lastName, employeeID, and dateJoined, which are enclosed in tags marked by angle brackets. Finally, Comma Separated Values, or CSV, is a format that uses separators like commas to separate data values. In this example, there are many different data fields which are separated with commas.

Now that you know about the diversity of log formats, you can focus on evaluating logs to build context around a detection. Coming up, you’ll explore how IDS signatures are used to detect, log, and alert on suspicious activity.
Reading: Overview of log file formats
Reading
You’ve learned about how logs record events that happen on a network, or system. In security, logs provide key details about activities that occurred across an organization, like who signed into an application at a specific point in time. As a security analyst, you’ll use log analysis, which is the process of examining logs to identify events of interest. It’s important to know how to read and interpret different log formats so that you can uncover the key details surrounding an event and identify unusual or malicious activity. In this reading, you’ll review the following log formats:
- JSON
- Syslog
- XML
- CSV
- CEF
JavaScript Object Notation (JSON)
JavaScript Object Notation (JSON) is a file format that is used to store and transmit data. JSON is known for being lightweight and easy to read and write. It is used for transmitting data in web technologies and is also commonly used in cloud environments. JSON syntax is derived from JavaScript syntax. If you are familiar with JavaScript, you might recognize that JSON contains components from JavaScript including:
- Key-value pairs
- Commas
- Double quotes
- Curly brackets
- Square brackets
Key-value pairs
A key-value pair is a set of data that represents two linked items: a key and its corresponding value. A key-value pair consists of a key followed by a colon, and then followed by a value. An example of a key-value pair is "Alert": "Malware".
Note: For readability, it is recommended that key-value pairs contain a space before or after the colon that separates the key and value.
Commas
Commas are used to separate data. For example: "Alert": "Malware", "Alert code": 1090, "severity": 10.
Double quotes
Double quotes are used to enclose text data, which is also known as a string, for example: "Alert": "Malware". Data that contains numbers is not enclosed in quotes, like this: "Alert code": 1090.
Curly brackets
Curly brackets enclose an object, which is a data type that stores data in a comma-separated list of key-value pairs. Objects are often used to describe multiple properties for a given key. JSON log entries start and end with a curly bracket. In this example, User is the object that contains multiple properties:
"User": { "id": "1234", "name": "user", "role": "engineer" }
Square brackets
Square brackets are used to enclose an array, which is a data type that stores data in a comma-separated ordered list. Arrays are useful when you want to store data as an ordered collection, for example: ["Administrators", "Users", "Engineering"].
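Because JSON is designed to be machine-readable, a log entry can be loaded directly into a data structure. Here is a minimal Python sketch using the example fields from this section; it assumes one JSON object per log entry:

import json

# One JSON log entry, using the example fields from this reading.
entry = '{"Alert": "Malware", "Alert code": 1090, "severity": 10, "User": {"id": "1234", "name": "user", "role": "engineer"}}'

log = json.loads(entry)        # parse the string into a dictionary
print(log["Alert"])            # Malware
print(log["User"]["role"])     # engineer (nested object access)
print(log["severity"] > 5)     # True (numbers parse as integers)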
Syslog
Syslog is a standard for logging and transmitting data. It can be used to refer to any of its three different capabilities:
- Protocol: The syslog protocol is used to transport logs to a centralized log server for log management. It uses port 514 for plaintext logs and port 6514 for encrypted logs.
- Service: The syslog service acts as a log forwarding service that consolidates logs from multiple sources into a single location. The service works by receiving and then forwarding any syslog log entries to a remote server.
- Log format: The syslog log format is one of the most commonly used log formats that you will be focusing on. It is the native logging format used in Unix® systems. It consists of three components: a header, structured-data, and a message.
Syslog log example
Here is an example of a syslog entry that contains all three components: a header, followed by structured-data, and a message:
<236>1 2022-03-21T01:11:11.003Z virtual.machine.com evntslog - ID01 [user@32473 iut="1" eventSource="Application" eventID="9999"] This is a log entry!
Header
The header contains details like the timestamp; the hostname, which is the name of the machine that sends the log; the application name; and the message ID.
- Timestamp: The timestamp in this example is 2022-03-21T01:11:11.003Z, where 2022-03-21 is the date in YYYY-MM-DD format. T is used to separate the date and the time. 01:11:11.003 is the 24-hour format of the time and includes the number of milliseconds 003. Z indicates the timezone, which is Coordinated Universal Time (UTC).
- Hostname: virtual.machine.com
- Application: evntslog
- Message ID: ID01
Structured-data
The structured-data portion of the log entry contains additional logging information. This information is enclosed in square brackets and structured in key-value pairs. Here, there are three keys with corresponding values: [user@32473 iut="1" eventSource="Application" eventID="9999"].
Message
The message contains a detailed log message about the event. Here, the message is This is a log entry!.
Priority (PRI)
The priority (PRI) field indicates the urgency of the logged event and is contained within angle brackets. In this example, the priority value is <236>. Generally, the lower the priority level, the more urgent the event is.
Note: Syslog headers can be combined with JSON and XML formats. Custom log formats also exist.
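Per the syslog standard, the PRI value packs two numbers into one: priority = facility × 8 + severity. Here is a minimal Python sketch decoding the example value above (the decoded facility, 29, falls outside the standard 0–23 facility table, so treat this particular value as illustrative):

def decode_pri(pri):
    """Split a syslog PRI value into (facility, severity): PRI = facility * 8 + severity."""
    return pri // 8, pri % 8

facility, severity = decode_pri(236)
print(facility, severity)  # 29 4 -> severity 4 corresponds to "Warning"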
XML (eXtensible Markup Language)
XML (eXtensible Markup Language) is a language and a format used for storing and transmitting data. XML is a native file format used in Windows systems. XML syntax uses the following:
- Tags
- Elements
- Attributes
Tags
XML uses tags to store and identify data. Tags are pairs that must contain a start tag and an end tag. The start tag encloses data with angle brackets, for example <tag>, whereas the end of a tag encloses data with angle brackets and a forward slash like this: </tag>.
Elements
XML elements include both the data contained inside of a tag and the tags themselves. All XML entries must contain at least one root element. Root elements contain other elements that sit underneath them, known as child elements.
Here is an example:
<Event>
  <EventID>4688</EventID>
  <Version>5</Version>
</Event>
In this example, <Event> is the root element and contains two child elements <EventID> and <Version>. There is data contained in each respective child element.
Attributes
XML elements can also contain attributes. Attributes are used to provide additional information about elements. Attributes are included as the second part of the tag itself and must always be quoted using either single or double quotes.
For example:
<EventData>
  <Data Name='SubjectUserSid'>S-2-3-11-160321</Data>
  <Data Name='SubjectUserName'>JSMITH</Data>
  <Data Name='SubjectDomainName'>ADCOMP</Data>
  <Data Name='SubjectLogonId'>0x1cf1c12</Data>
  <Data Name='NewProcessId'>0x1404</Data>
</EventData>
In the first line of this example, the tag is <Data> and it uses the attribute Name='SubjectUserSid' to describe the data enclosed in the tag, S-2-3-11-160321.
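Python’s standard library can parse XML entries like these. A minimal sketch against a fragment of the EventData example above (illustrative only):

import xml.etree.ElementTree as ET

# A fragment of the EventData example above.
entry = ("<EventData>"
         "<Data Name='SubjectUserName'>JSMITH</Data>"
         "<Data Name='SubjectDomainName'>ADCOMP</Data>"
         "</EventData>")

root = ET.fromstring(entry)
for data in root.findall("Data"):
    # The Name attribute identifies the field; the element text holds the value.
    print(data.get("Name"), "=", data.text)
# SubjectUserName = JSMITH
# SubjectDomainName = ADCOMP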
CSV (Comma Separated Value)
CSV (Comma Separated Value) uses commas to separate data values. In CSV logs, the position of the data corresponds to its field name, but the field names themselves might not be included in the log. It’s critical to understand what fields the source device (like an IPS, firewall, scanner, etc.) is including in the log.
Here is an example:
2009-11-24T21:27:09.534255,ALERT,192.168.2.7, 1041,x.x.250.50,80,TCP,ALLOWED,1:2001999:9,”ET MALWARE BTGrab.com Spyware Downloading Ads”,1
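Since the field names are not in the log itself, parsing a CSV entry usually means supplying them yourself. Here is a minimal Python sketch for the entry above; the column names are guesses based on the example values, not documented fields:

import csv
import io

row = ('2009-11-24T21:27:09.534255,ALERT,192.168.2.7,1041,x.x.250.50,80,'
       'TCP,ALLOWED,1:2001999:9,"ET MALWARE BTGrab.com Spyware Downloading Ads",1')

# Assumed column names -- the log itself carries no header row.
FIELDS = ["timestamp", "type", "src_ip", "src_port", "dst_ip", "dst_port",
          "protocol", "action", "sid", "message", "count"]

parsed = dict(zip(FIELDS, next(csv.reader(io.StringIO(row)))))
print(parsed["action"], "-", parsed["message"])
# ALLOWED - ET MALWARE BTGrab.com Spyware Downloading Ads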
CEF (Common Event Format)
Common Event Format (CEF) is a log format that uses key-value pairs to structure data and identify fields and their corresponding values. The CEF syntax is defined as containing the following fields:
CEF:Version|Device Vendor|Device Product|Device Version|Signature ID|Name|Severity|Extension
Fields are all separated with a pipe character |. However, anything in the Extension part of the CEF log entry must be written in a key-value format. Syslog is a common method used to transport logs like CEF. When syslog is used, a timestamp and hostname are prepended to the CEF message. Here is an example of a CEF log entry that details malicious activity relating to a worm infection:
Sep 29 08:26:10 host CEF:1|Security|threatmanager|1.0|100|worm successfully stopped|10|src=10.0.0.2 dst=2.1.2.2 spt=1232
Here is a breakdown of the fields:
- Syslog Timestamp: Sep 29 08:26:10
- Syslog Hostname: host
- Version: CEF:1
- Device Vendor: Security
- Device Product: threatmanager
- Device Version: 1.0
- Signature ID: 100
- Name: worm successfully stopped
- Severity: 10
- Extension: This field contains data written as key-value pairs. There are two IP addresses, src=10.0.0.2 and dst=2.1.2.2, and a source port number spt=1232. Extensions are not required and are optional to add.
This log entry contains details about a Security application called threatmanager that successfully stopped a worm from spreading from the internal network at 10.0.0.2 to the external network 2.1.2.2, from source port 1232. A high severity level of 10 is reported.
Note: Extensions and syslog prefix are optional to add to a CEF log.
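Parsing CEF mostly comes down to splitting on the pipe character and then on key-value pairs. Here is a minimal Python sketch for the entry above, assuming the syslog prefix has already been stripped; the extension parsing is naive and ignores escaped spaces:

CEF_HEADER = ["version", "device_vendor", "device_product", "device_version",
              "signature_id", "name", "severity"]

entry = ("CEF:1|Security|threatmanager|1.0|100|worm successfully stopped|10|"
         "src=10.0.0.2 dst=2.1.2.2 spt=1232")

parts = entry.split("|", 7)  # split at most 7 times; the extension may contain pipes
header = dict(zip(CEF_HEADER, parts[:7]))
extension = dict(kv.split("=", 1) for kv in parts[7].split())  # naive: ignores escaped spaces

print(header["name"], header["severity"])        # worm successfully stopped 10
print(extension["src"], "->", extension["dst"])  # 10.0.0.2 -> 2.1.2.2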
Key takeaways
There is no standard format used in logging, and many different log formats exist. As a security analyst, you will analyze logs that originate from different sources. Knowing how to interpret different log formats will help you determine key information that you can use to support your investigations.
Resources for more information
- To learn more about the syslog protocol including priority levels, check out The Syslog Protocol.
- If you would like to explore generating log formats, check out this open-source test data generator tool.
- To learn more about timestamp formats, check out Date and Time on the Internet: Timestamps.
Practice Quiz: Test your knowledge: Log components and formats
Examine the following authentication log:
[2022/12/20 08:20:38.921286] User nuhara logged in successfully
What type of information does this log contain? Select two answers.
Timestamp, Event description
The correct answers are:
- Timestamp: The log entry begins with [2022/12/20 08:20:38.921286], indicating the exact date and time the event occurred.
- Event description: The remaining part of the log, User nuhara logged in successfully, describes the specific event that took place, which is a successful login by the user “nuhara”.
Explanation of incorrect options:
- Syslog: Syslog is a protocol and log format commonly used for transmitting and storing logs. While this log entry could be in Syslog format, the information provided does not explicitly confirm or deny it.
- Message ID: A message ID is a unique identifier often assigned to log entries for tracking and correlation purposes. This log entry does not contain a visible message ID.
Which of the following capabilities can syslog be used for? Select three answers.
Log format, Service, Protocol
The correct answers are:
- Log format: Syslog defines a standard format for structuring log messages, ensuring consistency and readability across different systems and devices.
- Protocol: Syslog also functions as a protocol for transmitting log messages over a network, enabling centralized log collection and analysis.
- Service: Syslog can be implemented as a service on devices and systems, providing a mechanism for generating, storing, and forwarding log messages.
Explanation of incorrect option:
- Extension: Syslog is not typically referred to as an extension. It exists as a standalone protocol and format, although it can be integrated with other tools and systems for log management.
What are examples of log formats? Select three answers.
Common Event Format (CEF), eXtensible Markup Language (XML), JavaScript Object Notation (JSON)
The correct answers are:
- Common Event Format (CEF): An industry-standard format for security event logs, designed for efficient sharing and analysis of security information.
- eXtensible Markup Language (XML): A structured data format using tags and elements, often used for storing and transmitting logs due to its flexibility and readability.
- JavaScript Object Notation (JSON): A lightweight, human-readable format based on key-value pairs, commonly used for log data due to its simplicity and ease of parsing.
Explanation of incorrect option:
- Gramm-Leach-Bliley Act (GLBA): This is a US federal law regulating the financial industry, not a log format.
Which log format uses tags to structure data?
eXtensible Markup Language (XML)
XML is a log format that uses tags and other keys to structure data.
Overview of intrusion detection systems (IDS)
Video: Security monitoring with detection tools
This passage discusses different ways to monitor systems and networks for security threats. It focuses on two main technologies:
1. Intrusion detection systems (IDS):
- Monitor activity for suspicious behavior and generate alerts.
- Two types:
- Host-based: monitors a single device (laptop, server etc.)
- Network-based: monitors network traffic at specific points.
- Use signature analysis to match activity against predefined rules of known threats.
- Generate logs for further analysis.
2. Telemetry:
- Collection and transmission of data for analysis.
- Examples: logs, packet captures.
- Source of evidence for security investigations.
Overall, the passage emphasizes the importance of monitoring systems and networks for threats using tools like IDS and understanding the data they generate (logs and telemetry) for effective security.
Securing your systems and networks against ever-evolving threats requires constant vigilance. This is where security monitoring comes into play. By implementing detection tools and techniques, you gain the ability to proactively identify suspicious activity and respond to incidents before they cause significant damage.
This tutorial will serve as your roadmap to understanding and implementing security monitoring with detection tools. We’ll delve into the fundamentals, explore different types of tools, and equip you with the knowledge to build a robust security posture.
1. Understanding the Landscape:
Before diving into tools, let’s establish a common ground.
- Data is king: Effective detection relies heavily on data collected from various sources like logs, network traffic, and endpoint activity. Understanding how these data points are generated and collected is crucial.
- Telemetry vs. Logs: Telemetry refers to the ongoing collection and transmission of data, while logs capture specific events that have occurred. Both are valuable sources of information for security monitoring.
2. Introducing the Watchdogs:
Now, let’s meet the key players in security monitoring:
- Intrusion Detection Systems (IDS): These systems continuously monitor activity for suspicious patterns and generate alerts. They come in two flavors:
- Host-based IDS: Monitors a single device for malicious activity.
- Network-based IDS: Monitors network traffic for suspicious patterns.
- Security Information and Event Management (SIEM): A central hub that collects, analyzes, and correlates data from various security tools, including IDS and logs, providing a holistic view of your security posture.
3. Decoding the Threats:
Detection tools rely on various techniques to identify suspicious activity, including:
- Signature-based detection: Matches monitored activity against predefined patterns of known threats.
- Anomaly-based detection: Identifies activity that deviates from established baselines of normal behavior.
- Behavioral analysis: Monitors user and system behavior for suspicious patterns, such as unusual login attempts or data exfiltration.
4. Building Your Arsenal:
Now that you understand the basics, let’s explore some popular tools you can leverage:
- Open-source IDS: Snort, Suricata, and Zeek are powerful open-source options for network-based intrusion detection.
- Commercial IDS: Security vendors like Palo Alto Networks, McAfee, and Cisco offer comprehensive IDS solutions with advanced features.
- SIEM platforms: Splunk, Elastic SIEM, and LogRhythm are leading SIEM platforms that provide centralized log management and analysis capabilities.
5. Putting it all Together:
Implementing security monitoring is an ongoing process. Here’s a roadmap to get you started:
- Define your goals: What are you trying to achieve with security monitoring? Identify your critical assets and prioritize threats accordingly.
- Select your tools: Choose tools that align with your needs, budget, and technical expertise.
- Configure and deploy: Set up your chosen tools and configure them to monitor relevant data sources.
- Refine and adapt: Regularly analyze alerts, refine your detection rules, and adapt your monitoring strategy as threats evolve.
Remember: Security monitoring is not a one-time fix. It’s a continuous process that requires dedication, expertise, and adaptation. By understanding the fundamentals, choosing the right tools, and constantly refining your approach, you can build a robust security posture that effectively safeguards your systems and networks.
This is just a starting point. As you delve deeper, you’ll discover a vast array of tools, techniques, and best practices to explore. Remember, the key is to stay informed, adapt to the changing threat landscape, and continuously improve your security posture.
When monitoring activity, what specifies the rules used by an intrusion detection system (IDS)?
A signature
A signature specifies the rules that an IDS uses to monitor activity. Signature analysis is one of the most common methods of detection used by IDS tools.
Detection requires data, and this data can come from various data sources. You’ve already explored how different devices produce logs. Now we’ll examine how different detection technologies monitor devices and log different types of system activity, like network and endpoint telemetry. Telemetry is the collection and transmission of data for analysis. While logs record events occurring on systems, telemetry describes the data itself. For example, packet captures are considered network telemetry. For security professionals, logs and telemetry are sources of evidence that can be used to answer questions during investigations.

Previously, you learned about an intrusion detection system, or IDS. Remember that an IDS is an application that monitors activity and alerts on possible intrusions. This includes monitoring different parts of a system or network, like an endpoint. An endpoint is any device connected on a network, such as a laptop, tablet, desktop computer, or a smartphone. Endpoints are entry points into a network, which makes them key targets for malicious actors looking to gain unauthorized access into a system.

To monitor endpoints for threats or attacks, a host-based intrusion detection system can be used. It’s an application that monitors the activity of the host on which it’s installed. To clarify, a host is any device that communicates with other devices on a network, similar to an endpoint. Host-based intrusion detection systems are installed as an agent on a single host, such as a laptop computer or a server. Depending on its configuration, a host-based intrusion detection system will monitor the host on which it’s installed to detect suspicious activity. Once something has been detected, it records output as logs and an alert gets generated.

What if we wanted to monitor a network? A network-based intrusion detection system collects and analyzes network traffic and network data. Network-based intrusion detection systems work similarly to packet sniffers because they analyze network traffic and network data at a specific point in the network. It’s common to deploy multiple IDS sensors at different points in the network to achieve adequate visibility. When suspicious or unusual network activity is detected, the network-based intrusion detection system logs it and generates an alert. In this example, the network-based intrusion detection system is monitoring the traffic that’s both coming from and going to the internet.

Intrusion detection systems use different types of detection methods. One of the most common methods is signature analysis. Signature analysis is a detection method used to find events of interest. A signature specifies a set of rules that an IDS refers to when it monitors activity. If the activity matches the rules in the signature, the IDS logs it and sends out an alert. For example, a signature can be written to generate an alert if a failed login on a system happens three times in a row, which suggests a possible password attack. Before alerts are generated, the activity must be logged. IDS technologies record the information of the devices, systems, and networks which they monitor as IDS logs. IDS logs can then be sent, stored, and analyzed in a centralized log repository like a SIEM.

Coming up, we’ll explore how to read and configure signatures. Meet you there!
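To make the failed-login example from the video concrete, here is a minimal Python sketch of that signature as a simple threshold rule; the (user, outcome) event shape is invented for illustration and is not tied to any particular IDS:

from collections import defaultdict

THRESHOLD = 3  # alert after three consecutive failed logins, per the example rule

def detect_password_attack(events):
    """Yield an alert for any user reaching THRESHOLD consecutive failed logins."""
    streaks = defaultdict(int)
    for user, outcome in events:  # invented (user, outcome) event shape
        if outcome == "FAILED":
            streaks[user] += 1
            if streaks[user] == THRESHOLD:
                yield f"ALERT: possible password attack against {user}"
        else:
            streaks[user] = 0  # a successful login resets the streak

events = [("user1", "FAILED"), ("user1", "FAILED"),
          ("user2", "SUCCESS"), ("user1", "FAILED")]
print(list(detect_password_attack(events)))
# ['ALERT: possible password attack against user1']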
Reading: Detection tools and techniques
Reading
In this reading, you’ll examine the different types of intrusion detection system (IDS) technologies and the alerts they produce. You’ll also explore the two common detection techniques used by detection systems. Understanding the capabilities and limitations of IDS technologies and their detection techniques will help you interpret security information to identify, analyze, and respond to security events.
As you’ve learned, an intrusion detection system (IDS) is an application that monitors system activity and alerts on possible intrusions. IDS technologies help organizations monitor the activity that happens on their systems and networks to identify indications of malicious activity. Depending on the location you choose to set up an IDS, it can be either host-based or network-based.
Host-based intrusion detection system
A host-based intrusion detection system (HIDS) is an application that monitors the activity of the host on which it’s installed. A HIDS is installed as an agent on a host. A host is also known as an endpoint, which is any device connected to a network like a computer or a server.
Typically, HIDS agents are installed on all endpoints and used to monitor and detect security threats. A HIDS monitors internal activity happening on the host to identify any unauthorized or abnormal behavior. If anything unusual is detected, such as the installation of an unauthorized application, the HIDS logs it and sends out an alert.
In addition to monitoring inbound and outbound traffic flows, HIDS can have additional capabilities, such as monitoring file systems, system resource usage, user activity, and more.
This diagram shows a HIDS tool installed on a computer. The dotted circle around the host indicates that it is only monitoring the local activity on the single computer on which it’s installed.
Network-based intrusion detection system
A network-based intrusion detection system (NIDS) is an application that collects and monitors network traffic and network data. NIDS software is installed on devices located at specific parts of the network that you want to monitor. The NIDS application inspects network traffic from different devices on the network. If any malicious network traffic is detected, the NIDS logs it and generates an alert.
This diagram shows a NIDS that is installed on a network. The highlighted circle around the server and computers indicates that the NIDS is installed on the server and is monitoring the activity of the computers.
Using a combination of HIDS and NIDS to monitor an environment can provide a multi-layered approach to intrusion detection and response. HIDS and NIDS tools provide a different perspective on the activity occurring on a network and the individual hosts that are connected to it. This helps provide a comprehensive view of the activity happening in an environment.
Detection techniques
Detection systems can use different techniques to detect threats and attacks. The two types of detection techniques that are commonly used by IDS technologies are signature-based analysis and anomaly-based analysis.
Signature-based analysis
Signature analysis, or signature-based analysis, is a detection method that is used to find events of interest. A signature is a pattern that is associated with malicious activity. Signatures can contain specific patterns like a sequence of binary numbers, bytes, or even specific data like an IP address.
Previously, you explored the Pyramid of Pain, which is a concept that prioritizes the different types of indicators of compromise (IoCs) associated with an attack or threat, such as IP addresses, tools, tactics, techniques, and more. IoCs and other indicators of attack can be useful for creating targeted signatures to detect and block attacks.
Different types of signatures can be used depending on which type of threat or attack you want to detect. For example, an anti-malware signature contains patterns associated with malware. This can include malicious scripts that are used by the malware. IDS tools will monitor an environment for events that match the patterns defined in this malware signature. If an event matches the signature, the event gets logged and an alert is generated.
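As a toy illustration of this matching process (not any vendor’s detection engine), here is a minimal Python sketch that checks data against a small set of invented byte-pattern signatures:

# Toy signature database mapping names to byte patterns. Both patterns
# are invented for illustration; real signatures are far more elaborate.
SIGNATURES = {
    "test-malware-marker": b"MALWARE_TEST_PATTERN",
    "suspicious-encoded-shell": b"powershell -enc",
}

def match_signatures(data):
    """Return the names of all signatures whose pattern appears in the data."""
    return [name for name, pattern in SIGNATURES.items() if pattern in data]

sample = b"cmd.exe /c powershell -enc SQBFAFgA"
for hit in match_signatures(sample):
    print(f"event logged, alert generated: matched signature {hit}")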
Advantages
- Low rate of false positives: Signature-based analysis is very efficient at detecting known threats because it is simply comparing activity to signatures. This leads to fewer false positives. Remember that a false positive is an alert that incorrectly detects the presence of a threat.
Disadvantages
- Signatures can be evaded: Signatures are unique, and attackers can modify their attack behaviors to bypass the signatures. For example, attackers can make slight modifications to malware code to alter its signature and avoid detection.
- Signatures require updates: Signature-based analysis relies on a database of signatures to detect threats. Each time a new exploit or attack is discovered, new signatures must be created and added to the signature database.
- Inability to detect unknown threats: Signature-based analysis relies on detecting known threats through signatures. Unknown threats can’t be detected, such as new malware families or zero-day attacks, which are exploits that were previously unknown.
Anomaly-based analysis
Anomaly-based analysis is a detection method that identifies abnormal behavior. There are two phases to anomaly-based analysis: a training phase and a detection phase. In the training phase, a baseline of normal or expected behavior must be established. Baselines are developed by collecting data that corresponds to normal system behavior. In the detection phase, the current system activity is compared against this baseline. Activity that happens outside of the baseline gets logged, and an alert is generated.
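Here is a minimal Python sketch of the two phases using a simple statistical baseline (mean and standard deviation of hourly login counts); the counts are invented, and real systems use much richer behavioral models:

from statistics import mean, stdev

# Training phase: hourly login counts collected during normal operation (invented data).
baseline_counts = [12, 15, 11, 14, 13, 12, 16, 14]
mu, sigma = mean(baseline_counts), stdev(baseline_counts)

def is_anomalous(count, z_threshold=3.0):
    """Detection phase: flag counts more than z_threshold deviations from the baseline."""
    return abs(count - mu) > z_threshold * sigma

for observed in [13, 14, 47]:
    if is_anomalous(observed):
        print(f"ALERT: {observed} logins this hour deviates from the baseline")
# ALERT: 47 logins this hour deviates from the baseline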
Advantages
- Ability to detect new and evolving threats: Unlike signature-based analysis, which uses known patterns to detect threats, anomaly-based analysis can detect unknown threats.
Disadvantages
- High rate of false positives: Any behavior that deviates from the baseline can be flagged as abnormal, including non-malicious behaviors. This leads to a high rate of false positives.
- Pre-existing compromise: If an attacker is already active during the training phase, their malicious activity becomes part of the baseline, which can allow a pre-existing compromise to go undetected.
Key takeaways
IDS technologies are an essential security tool that you will encounter in your security journey. To recap, a NIDS monitors an entire network, whereas a HIDS monitors individual endpoints. IDS technologies generate different types of alerts. Lastly, IDS technologies use different detection techniques like signature-based or anomaly-based analysis to identify malicious activity.
Video: Grace: Security mindset in detection and response
- Grace’s job is to detect hackers trying to hack Google. This protects people who trust Google with their data, such as journalists and activists.
- Security mindset is about curiosity and understanding how hackers and defenders think. It’s also about empathizing with how information can be attained from unusual sources.
- One example of an unusual way to get information is by analyzing CPU activity. Harder tasks require more energy, so the CPU runs hotter and executes more operations. This information can be used to deduce what’s happening on the system.
- To develop a security mindset, Grace recommends listening to stories from hackers and defenders, reading news articles about cyber threats, attending conferences and meetups, and practicing with others.
- Don’t give up when you have roadblocks! Studying for certifications is worth it, even if it gets hard. Be patient and gentle with yourself.
Additional takeaways:
- Security is a challenging but rewarding field.
- There are many resources available to help you learn more about cybersecurity.
- It’s important to be curious and have a strong work ethic.
[MUSIC] Hi, I’m Grace, and I work in Detection and
Response at Google. When I tell people what I do,
they think it’s awesome, I love being able to say, my job is to
detect hackers trying to hack Google. There are people who trust us with their
data that play critical roles in society, like journalists and
activists, for example. So they need to be able to
have their data with us and trust that it’s going to be safe. Security mindset is about curiosity. There’s a really nice overlap between
cybersecurity and computers and having that creative and logical outlet
and an interest in big world matters. What hackers are thinking,
what defenders are thinking. I’m empathizing with people
looking at how information can be attained,
perhaps sometimes from unusual sources. An example of one of the craziest things
that I’ve learned about would be how people can get information from a CPU. Some tasks for
a CPU are more difficult than others, require more energy to do multiplying
numbers as an example of that, which means that the CPU
is going to work harder, it’s going to get hotter,
it’s going to be executing more functions. So you can use that information to know
things about what that CPU is doing. From there, you can start to deduce what’s
happening at a given point in time. What I recommend to people who
are interested in developing a security mindset is listen to stories. There are podcasts that have
great interviews with hackers. I recommend following the news and
reading news articles about different cyber threats that
are happening in the world. I recommend going to conferences,
go to meetups, finding people that you can study with and
practice with. Even hackers are teaching each other how
to hack things in forums and chat rooms. It’s not cheating to ask for help. Another piece of advice that I have for people would be to not give
up when you have roadblocks. Studying the certificate
is a really good idea, and it’s really worth persevering
right to the very end. Even when it gets hard and you start
feeling overwhelmed, that’s okay, they’re new terms. I can guarantee that if you come back
to it later, you’ll be more familiar. You’ll find it easier. Being really gentle with yourself and
understanding and patient will help a lot when
you’re facing these challenges. [SOUND]
Video: Components of a detection signature
As a security analyst, crafting and interpreting signatures is crucial for effectively detecting network intrusions. This skill is like learning a new language, where each element plays a specific role in identifying suspicious activity. In this section, we’ll dive into the basic syntax of signatures, equipping you with the ability to read and understand these critical detection rules.
Components of a Signature:
Imagine a signature as a detective’s profile of a suspect. Just like any good profile, it needs specific details to accurately identify the target. In the world of network security, these details come in the form of three key components:
- Action: This tells the IDS what to do if the rule criteria are met. Think of it as the detective’s next move upon recognizing the suspect. Common actions include:
- Alert: Raise the alarm and notify the security team.
- Pass: Allow the traffic to proceed if it’s deemed safe.
- Reject: Block the traffic to prevent potential harm.
- Header: This defines the network traffic the signature targets. It’s like the suspect’s description – their IP address, the ports they frequent, and the protocols they use. Here’s what you might find in a header:
- Protocol: The communication language used, like TCP or UDP.
- IP Addresses: Source and destination addresses of the traffic.
- Ports: The specific channels used for communication.
- Direction: Whether the traffic is incoming or outgoing.
- Rule Options: These fine-tune the signature, adding extra details to narrow down the target. Think of it as the detective gathering additional clues, like the suspect’s clothing or carrying items. Some common options include:
- Content Matching: Searching for specific patterns within the network packets, like malicious payloads.
- Time Constraints: Specifying when the rule is active, for example, only during business hours.
Putting it Together:
Let’s break down an example signature to see how these components work in action:
tcp 10.120.170.17 any -> 133.113.202.181 80 (msg:"This is a message."; sid:1000; rev:1)
Here’s the breakdown:
- Action: alert (not shown explicitly in this snippet; the action normally comes first)
- Header:
  - Protocol: tcp
  - Source IP: 10.120.170.17
  - Source Port: any (meaning any port on the source IP)
  - Arrow: -> indicates the direction (traffic originating from the source)
  - Destination IP: 133.113.202.181
  - Destination Port: 80 (commonly used for HTTP traffic)
- Rule Options:
  - msg:"This is a message.": Alert message to be displayed.
  - sid:1000: Unique identifier for the signature.
  - rev:1: Indicates the first version of the signature.
By understanding these components and how they work together, you can effectively read and interpret signatures, empowering you to identify potential threats lurking in your network traffic. Remember, this is just the beginning of your journey as a security analyst. Stay tuned for the next section where we’ll explore the tools that utilize these signatures to keep your network safe!
Introduction
In the realm of cybersecurity, detection signatures play a pivotal role in identifying and flagging suspicious activity within networks and systems. They act as blueprints that guide security tools in recognizing potential threats. Understanding the components of a detection signature is crucial for effectively configuring and utilizing these tools. This tutorial will delve into the key elements of signatures and how they work together to safeguard your digital assets.
Key Components
1. Action:
- Determines the action to be taken when a rule is triggered.
- Common actions include:
- Alert: Generate a notification to security personnel.
- Log: Record the event for further analysis.
- Block: Prevent the suspicious traffic from proceeding.
2. Header:
- Defines the specific characteristics of the network traffic to be scrutinized.
- Typically includes:
- Protocol: The communication language used (e.g., TCP, UDP, ICMP).
- IP Addresses: Source and destination IP addresses.
- Ports: Specific communication channels.
- Direction: Incoming or outgoing traffic.
3. Rule Options:
- Offer customization and fine-tuning of the signature’s detection capabilities.
- Examples include:
- Content Matching: Seeking specific patterns within network packets (e.g., malicious code).
- Time Constraints: Activating or deactivating rules based on time frames.
- Message: Customizing the alert message to provide relevant details.
4. Meta-Information:
- Provides additional context for signature identification and management.
- Examples include:
- Signature ID (SID): Unique identifier for each signature.
- Revision Number: Indicates updates or modifications to the signature.
- Author: Creator of the signature.
Example Signature:
alert tcp 192.168.1.100 any -> 192.168.1.200 80 (content:"malware_pattern"; msg:"Potential malware detected"; sid:1234; rev:2)
Breakdown:
- Action: alert
- Header:
  - Protocol: tcp
  - Source IP: 192.168.1.100
  - Source Port: any
  - Direction: -> (outgoing traffic)
  - Destination IP: 192.168.1.200
  - Destination Port: 80 (HTTP)
- Rule Options:
  - content:"malware_pattern": Seeks content matching the specified pattern.
  - msg:"Potential malware detected": Alert message to be displayed.
  - sid:1234: Unique signature ID.
  - rev:2: Second version of the signature.
Conclusion
Understanding these core components empowers you to effectively interpret, configure, and even create detection signatures to enhance your security posture. By mastering signatures, you’ll be better equipped to identify and mitigate potential threats within your networks, ensuring the safety and integrity of your digital assets.
As a security analyst, you may be tasked with writing, customizing, or
testing signatures. To do this, you’ll
use IDS tools. So in this section, we’ll examine signature syntax and by the end, you’ll be able to
read a signature. A signature specifies
detection rules. These rules outline
the types of network intrusions you
want an IDS to detect. For example, a signature can be written to detect and alert on suspicious traffic attempting
to connect to a port. Rule language
differs depending on different network intrusion
detection systems. The term network intrusion
detection system is often abbreviated as the acronym N-I-D-S
and pronounced NIDS. Generally, NIDS rules
consists of three components: an action, a header,
and rule options. Now, let’s examine each of these three components
in more detail. Typically, the action is the first item specified
in a signature. This determines the
action to take if the rule criteria
matches are met. Actions differ across
NIDS rule languages, but some common actions are:
alert, pass, or reject. Using our example, if a
rule specifies to alert on suspicious network traffic that establishes an unusual
connection to a port, the IDS will inspect the traffic packets
and send out an alert. The header defines the
signature’s network traffic. These include
information such as source and destination
IP addresses, source and destination ports, protocols, and
traffic direction. If we want to
detect an alert on suspicious traffic
connecting to a port, we have to first
define the source of the suspicious
traffic in the header. Suspicious traffic
can originate from IP addresses outside
the local network. It can also use specific
or unusual protocols. We can specify
external IP addresses and these protocols
in the header. Here’s an example of how header information may
appear in a basic rule. First, we can observe
that the protocol, TCP, is the first listed
item in the signature. Next, the source IP address 10.120.170.17 and the
source port number is specified as being any. The arrow in the middle
of the signature indicates the direction
of the network traffic. So we know it’s originating
from the source IP 10.120.170.17 from any port going to the following
destination IP address 133.113.202.181
and destination port 80. The rule options
lets you customize signatures with
additional parameters. There are many different
options available to use. For instance, you can
set options to match the content of a network packet to detect malicious payloads. Malicious payloads reside in
a packet’s data and perform malicious activity like
deleting or encrypting data. Configuring rule options helps in narrowing down
network traffic, so you can find exactly
what you’re looking for. Typically, rule options
are separated by semi-colons and enclosed
in parentheses. In this example, we can examine that the rule options
are enclosed in a pair of parentheses and are also separated with semi-colons. The first rule option, msg, which stands for message, provides the alert’s text. In this case, the alert
will print out the text: “This is a message.” There’s also the option sid, which stands for signature ID. This attaches a unique
id to each signature. The rev option
stands for revision. Each time a signature
is updated or changed, the revision number changes. Here, the number 1 means it’s the first version
of the signature. Great! Now you’ve
developed another skill in your journey towards
becoming a security analyst: how to read signatures. There’s so much more to
learn and coming up, we’ll discuss tools
that use signatures.
Video: Examine signatures with Suricata
This passage focuses on analyzing a pre-written signature in Suricata, an open-source IDS tool. Here are the key points:
- Location: The /etc/suricata/rules/custom.rules file contains custom signatures.
- Signature Structure:
  - Action: alert triggers an alert if all conditions are met.
  - Header:
    - Protocol: http
    - Source: HOME_NET (any port)
    - Destination: EXTERNAL_NET (any port)
    - Direction: Network traffic leaving the home network.
  - Rule Options:
    - msg: Shows “GET on wire” on alert.
    - flow: Matches “established” connections.
    - content: Matches network packets containing the text “GET”.
- Function: Alerts when Suricata sees an HTTP “GET” request leaving the home network.
- Customization: Security analysts can test, modify, or create IDS signatures to fit specific environments and reduce false positives.
This example demonstrates how signature-based analysis uses pre-written templates and additional options to detect specific network behavior. Remember, tailoring signatures to your environment is crucial for effective intrusion detection.
Examine Signatures with Suricata: A Hands-on Tutorial
In the realm of cybersecurity, understanding and analyzing signatures is fundamental for effectively deploying intrusion detection systems like Suricata. This tutorial will guide you through examining pre-written and custom signatures within Suricata, equipping you with the skills to interpret their function and tailor them to your specific needs.
Materials:
- A Linux machine with Suricata installed
- Basic familiarity with the command line
Steps:
- Locate Signature Files:
Navigate to the Suricata configuration directory:
cd /etc/suricata/
Locate the rules folder, where pre-written and custom signatures reside:
cd rules
- Explore Pre-written Signatures:
List the contents of the folder to identify the different rule files:
ls
For instance, you might find pre-written rules for protocols like HTTP, DNS, and SMB. Choose a relevant rule file, such as http.rules.
- Examine a Specific Signature:
Open a signature file, for example http.rules, using a text editor like nano:
nano http.rules
- Deconstruct the Signature:
Each signature line corresponds to specific detection criteria. Let’s break down the key components:
  - Action: Specifies what Suricata does when the rule matches, e.g., alert, drop, or log.
  - Header: Defines the network traffic characteristics, including:
    - Protocol (e.g., http)
    - Source and destination IP addresses or networks
    - Source and destination ports
    - Traffic direction (inbound/outbound)
  - Rule Options: Refine the signature match with additional conditions, such as:
    - content: Search for specific patterns within packets (e.g., malicious payloads)
    - flow: Match on properties of the traffic flow (e.g., established connections)
    - sid: Unique identifier for the signature
    - rev: Revision number indicating updates
- Analyze a Signature Example:
Consider the following line from the http.rules file:
alert tcp HOME_NET any -> EXTERNAL_NET any (msg:"HTTP GET on wire"; flow:established; content:"GET"; sid:1000; rev:1)
Breakdown:
- Action: alert triggers an alert when the rule matches.
- Header:
  - Protocol: tcp
  - Source: HOME_NET (any port)
  - Destination: EXTERNAL_NET (any port)
  - Direction: Traffic leaving the home network.
- Rule Options:
  - msg: Displays “HTTP GET on wire” on alert.
  - flow: Matches only established connections.
  - content: Looks for the text “GET” in network packets.
  - sid: Unique identifier 1000.
  - rev: First version of the signature.
This signature alerts when Suricata detects an HTTP “GET” request leaving the home network.
- Modify and Test Signatures (Optional):
With a grasp of signature composition, you can modify existing rules or create new ones to address specific threats in your environment. Remember to thoroughly test any modifications to avoid false positives and ensure effective detection.
Bonus:
- Explore other rule sets like dns.rules or smb.rules to understand how Suricata detects threats in different protocols.
- Run Suricata in test mode to check rule syntax and identify potential errors, for example:
sudo suricata -T -c /etc/suricata/suricata.yaml -S /etc/suricata/rules/custom.rules
By mastering the art of examining and adapting signatures, you can unleash the full potential of Suricata and keep your network safe from lurking threats. Remember, knowledge is power – keep exploring and learning to stay ahead of the curve in the ever-evolving cybersecurity landscape!
Which of the following is true when it comes to analyzing Suricata signatures?
The first field specifies the action.
In a Suricata signature, the first field specifies the action.
Previously, you learned about signature-based
analysis. You also learned how to read signatures used in network-based
intrusion detection systems. Here, we’ll use an open source signature-based
IDS called Suricata to examine a signature. Many NIDS technologies come with
pre-written signatures. You can think of these signatures
as customizable templates. Sort of like different templates
available in a word processor. These signature templates provide you
with a starting point for writing and defining your rules. You can also write and add your own rules. Let’s examine a pre-written
signature through Suricata. On this Linux machine running Ubuntu,
Suricata is already installed. Let’s examine some of its files by
changing directories to the etc directory and into the suricata directory. This is where all of Suricata’s
configuration files live. Next, we’ll use the ls command to list
the contents of the suricata directory. There’s a couple of different files in
here, but we’ll focus on the rules folder. This is where the pre-written
signatures are. You can also add custom signatures here. We’ll use the cd command followed by
the name of the folder to navigate to that folder. Using the ls command, we can observe that
the folder contains some rule templates for different protocols and services. Let’s examine the custom.rules
file using the less command. As a quick refresher, the less command
returns the content of a file one page at a time which makes it easy to move
forward and backward through the content. We’ll use the arrow key to scroll up. Lines that begin with a pound sign (#)
are comments meant to provide context for those who read them and
are ignored by Suricata. The first line says Custom rules
example for HTTP connection. This tells us that this file contains
custom rules for HTTP connections. We can observe that there’s a signature. The first word specifies
the signature’s ACTION. For this signature, the action is alert. This means that the signature generates an
alert when all of the conditions are met. The next part of
the signature is the HEADER. It specifies the protocol http.
The source IP address is HOME_NET and source port is defined as ANY. The arrow indicates the direction of
traffic coming from the home network and going to the destination IP address
EXTERNAL_NET and ANY destination port. So far, we know that this signature
triggers an alert when it detects any HTTP traffic leaving the home network and
going to the external network. Let’s examine the remainder of
the signature to identify if there’s any additional conditions
the signature looks for. The last part of the signature
includes the RULE OPTIONS. They’re enclosed in parentheses and
separated by semicolons. There’s many options listed here, but
we’ll focus on the message, flow, and content options. The message option will show the message
“GET on wire” once the alert is triggered. The flow option is used to match on
direction of network traffic flow. Here, it’s established. This means that a connection
has been successfully made. The content option inspects
the content of a packet. Here, between the quotation marks,
the text GET is specified. GET is an HTTP request that’s used to
retrieve and request data from a server. This means the signature will match if
a network packet contains the text GET, indicating a request. To summarize, this signature alerts
anytime Suricata observes the text GET in an HTTP connection from the home network,
going to the external network. Every environment is different and
in order for an IDS to be effective,
signatures must be tested and tailored. As a security analyst,
you may test, modify, or create IDS signatures to improve the
detection of threats in an environment and reduce the likelihood of false positives. Coming up,
we’ll examine how Suricata logs events. Meet you there.
Video: Examine Suricata logs
Key Points:
- Suricata uses EVE JSON format for logs, featuring key-value pairs for easier searching and extraction.
- It generates two main log types:
  - Alert Logs:
    - Contain security-relevant information.
    - Triggered by signatures detecting suspicious activity.
    - Example: Alert about malware detection.
  - Network Telemetry Logs:
    - Record general network traffic flows.
    - Not always security-specific.
    - Example: HTTP request to a website with details like hostname, user agent, and content type.
Understanding these log types is crucial for cybersecurity investigations, as they provide essential data for piecing together the events that occurred on a network.
Deciphering the Network’s Story: A Guide to Examining Suricata Logs
In the world of cybersecurity, logs are the silent storytellers, revealing crucial insights into the events occurring within your network. Suricata, a powerful intrusion detection system, meticulously documents its findings in log files. This tutorial will equip you with the skills to navigate and interpret these logs, empowering you to uncover potential threats and safeguard your digital assets.
Prerequisites:
- Basic understanding of cybersecurity concepts and Suricata’s role as an IDS.
- Access to a system running Suricata with log files.
- Familiarity with command-line tools or a log analysis software.
Steps:
- Locate the Logs:
  - Default location on Linux systems: /var/log/suricata/
  - The configuration file (suricata.yaml) might specify a different location.
- Identify Log Types:
  - Alert Logs (eve.json or fast.log): Contain security-relevant events triggered by signatures.
  - Network Telemetry Logs (http.log, dns.log, etc.): Record general network traffic flows.
- Choose Your Tool:
  - Command-Line Tools: cat, grep, less, and awk for basic text manipulation and searching.
  - Log Analysis Software: Dedicated tools for advanced filtering, visualization, and correlation.
- Inspect Log Structure:
  - EVE JSON format: Key-value pairs for easier extraction. (A trimmed example record appears after these steps.)
  - Common fields:
    - timestamp: Event time
    - event_type: Alert or telemetry log type
    - src_ip, dest_ip: Source and destination IP addresses
    - proto: Network protocol (TCP, UDP, etc.)
    - alert: Signature details, if applicable
    - http, dns, etc.: Protocol-specific fields for telemetry logs
- Filter and Search:
  - Use your tools to narrow down relevant events (see the jq sketch after these steps):
    - Time range: Focus on specific time periods.
    - IP addresses: Investigate specific hosts or networks.
    - Signature IDs: Examine alerts triggered by particular rules.
- Interpret Findings:
- Analyze alert logs for potential threats and prioritize investigation.
- Use network telemetry logs to reconstruct network activity and understand context.
- Correlate logs with other data sources (e.g., firewall logs, system logs) for a comprehensive view.
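To make the log structure concrete, here is a trimmed, illustrative eve.json alert record; the field values are placeholders, and exact contents vary by Suricata version and configuration:

```json
{
  "timestamp": "2023-10-05T12:00:00.000000+0000",
  "event_type": "alert",
  "src_ip": "10.0.0.5",
  "src_port": 49152,
  "dest_ip": "203.0.113.7",
  "dest_port": 80,
  "proto": "TCP",
  "alert": {
    "signature_id": 1000,
    "rev": 1,
    "signature": "GET on wire",
    "severity": 3
  }
}
```

Because eve.json is written as one JSON object per line, a tool like jq (if installed) is convenient for filtering; for example (paths and values illustrative):

```
jq 'select(.event_type == "alert")' /var/log/suricata/eve.json
jq -c 'select(.src_ip == "10.0.0.5") | {timestamp, dest_ip, sig: .alert.signature}' /var/log/suricata/eve.json
```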
Additional Tips:
- Regularly Review Logs: Proactive analysis can uncover threats early on.
- Automate Log Analysis: Set up alerts for critical events or use tools for continuous monitoring.
- Contextualize Findings: Consider network topology, asset criticality, and threat intelligence for informed decision-making.
- Tail Logs in Real-Time: Use tail -f to monitor logs live for immediate insights.
- Utilize Logging Options: Explore Suricata’s configuration for log customization and output tuning.
Remember, effective log analysis is an ongoing process. By mastering the art of examining Suricata logs, you’ll gain invaluable visibility into your network’s activities, enabling you to detect, investigate, and respond to threats swiftly and effectively. Stay vigilant and keep exploring the stories hidden within the logs!
Now let’s examine some logs
generated by Suricata. In Suricata, alerts and events are output
in a format known as EVE JSON. EVE stands for Extensible Event Format and
JSON stands for JavaScript Object Notation. As you previously learned, JSON uses
key-value pairs, which simplifies both searching and
extracting text from log files. Suricata generates two types of log data:
alert logs and network telemetry logs. Alert logs contain information that’s
relevant to security investigations. Usually this is the output of signatures
which have triggered an alert. For example, a signature that detects
suspicious traffic across the network generates an alert log that
captures details of that traffic. While network telemetry logs contain
information about network traffic flows, network telemetry is not always security
relevant, it’s simply recording what’s happening on a network, such as a
connection being made to a specific port. Both of these log types provide
information to build a story during an investigation. Let’s examine an example
of both log types. Here’s an example of an event log. We can tell that this event is an alert
because the event type field says alert. There’s also details about the activity
that was logged including IP addresses and the protocol. There are also details about the signature
itself, such as the message and id. From the signature’s message, it appears that this alert relates
to the detection of malware. Next up, we have an example of
a network telemetry log, which shows us the details of an http
request to a website. The event type field tells
us it’s an http log. There’s details about the request. Under hostname,
there’s the website that was accessed. The user agent is the name of software
that connects you to the website. In this case,
it’s the web browser Mozilla 5.0. And the content type, which is the data
the http request returned. Here it’s specified as HTML text. That sums it up on the different
types of log outputs. In the upcoming activity, you’ll be applying what we just explored
by getting hands-on with Suricata. Have fun!
Reading: Overview of Suricata
Reading
So far, you’ve learned about detection signatures and you were introduced to Suricata, an intrusion detection system (IDS).
In this reading, you’ll explore more about Suricata. You’ll also learn about the value of writing customized signatures and configuration. This is an important skill to build in your cybersecurity career because you might be tasked with deploying and maintaining IDS tools.
Introduction to Suricata
Suricata is an open-source intrusion detection system, intrusion prevention system, and network analysis tool.
Suricata features
There are three main ways Suricata can be used:
- Intrusion detection system (IDS): As a network-based IDS, Suricata can monitor network traffic and alert on suspicious activities and intrusions. Suricata can also be set up as a host-based IDS to monitor the system and network activities of a single host like a computer.
- Intrusion prevention system (IPS): Suricata can also function as an intrusion prevention system (IPS) to detect and block malicious activity and traffic. Running Suricata in IPS mode requires additional configuration such as enabling IPS mode.
- Network security monitoring (NSM): In this mode, Suricata helps keep networks safe by producing and saving relevant network logs. Suricata can analyze live network traffic, existing packet capture files, and create and save full or conditional packet captures. This can be useful for forensics, incident response, and for testing signatures. For example, you can trigger an alert and capture the live network traffic to generate traffic logs, which you can then analyze to refine detection signatures.
Rules
Rules or signatures are used to identify specific patterns, behavior, and conditions of network traffic that might indicate malicious activity. The terms rule and signature are often used interchangeably in Suricata. Security analysts use signatures, or patterns associated with malicious activity, to detect and alert on specific malicious activity. Rules can also be used to provide additional context and visibility into systems and networks, helping to identify potential security threats or vulnerabilities.
Suricata uses signature analysis, which is a detection method used to find events of interest. Signatures consist of three components:
- Action: The first component of a signature. It describes the action to take if network or system activity matches the signature. Examples include: alert, pass, drop, or reject.
- Header: The header includes network traffic information like source and destination IP addresses, source and destination ports, protocol, and traffic direction.
- Rule options: The rule options provide you with different options to customize signatures.
Here’s an example of a Suricata signature:
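A representative signature along those lines (reconstructed for illustration; the exact sid and rev values are placeholders):

```
alert http $HOME_NET any -> $EXTERNAL_NET any (msg:"GET on wire"; flow:established,to_server; content:"GET"; sid:12345; rev:3;)
```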
Rule options have a specific ordering and changing their order would change the meaning of the rule.
Note: The terms rule and signature are synonymous.
Note: Rule order refers to the order in which rules are evaluated by Suricata. Rules are processed in the order in which they are defined in the configuration file. However, Suricata processes rules in a different default order: pass, drop, reject, and alert. Rule order affects the final verdict of a packet especially when conflicting actions such as a drop rule and an alert rule both match on the same packet.
Custom rules
Although Suricata comes with pre-written rules, it is highly recommended that you modify or customize the existing rules to meet your specific security requirements.
There is no one-size-fits-all approach to creating and modifying rules. This is because each organization’s IT infrastructure differs. Security teams must extensively test and modify detection signatures according to their needs.
Creating custom rules helps to tailor detection and monitoring. Custom rules help to minimize the amount of false positive alerts that security teams receive. It’s important to develop the ability to write effective and customized signatures so that you can fully leverage the power of detection technologies.
Configuration file
Before detection tools are deployed and can begin monitoring systems and networks, you must properly configure their settings so that they know what to do. A configuration file is a file used to configure the settings of an application. Configuration files let you customize exactly how you want your IDS to interact with the rest of your environment.
Suricata’s configuration file is suricata.yaml, which uses the YAML file format for syntax and structure.
Log files
There are two log files that Suricata generates when alerts are triggered:
- eve.json: The eve.json file is the standard Suricata log file. This file contains detailed information and metadata about the events and alerts generated by Suricata stored in JSON format. For example, events in this file contain a unique identifier called flow_id which is used to correlate related logs or alerts to a single network flow, making it easier to analyze network traffic. The eve.json file is used for more detailed analysis and is considered to be a better file format for log parsing and SIEM log ingestion.
- fast.log: The fast.log file is used to record minimal alert information, including basic IP address and port details about the network traffic. It’s used for basic logging and alerting. Because it’s considered a legacy file format, it’s not suitable for incident response or threat hunting tasks.
The main difference between the eve.json file and the fast.log file is the level of detail that is recorded in each. The fast.log file records basic information, whereas the eve.json file contains additional verbose information.
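For comparison, a fast.log entry occupies one line per alert and looks roughly like this (timestamp, addresses, and classification are placeholders):

```
10/05/2023-12:00:00.000000  [**] [1:1000:1] GET on wire [**] [Classification: (null)] [Priority: 3] {TCP} 10.0.0.5:49152 -> 203.0.113.7:80
```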
Key takeaways
In this reading, you explored some of Suricata’s features, rules syntax, and the importance of configuration. Understanding how to configure detection technologies and write effective rules will provide you with clear insight into the activity happening in an environment so that you can improve detection capability and network visibility. Go ahead and start practicing using Suricata in the upcoming activity!
Resources for more information
If you would like to learn more about Suricata including rule management and performance, check out the following resources:
Practice Quiz: Test your knowledge: Overview of intrusion detection systems (IDS)
A security analyst uses a network protocol analyzer to capture HTTP traffic to analyze patterns. What type of data are they using?
Network telemetry
They are using network telemetry data. Network telemetry refers to the collection and transmission of network data for analysis, such as HTTP traffic.
AI
The answer is Network telemetry.
Explanation:
- Network telemetry refers to data that captures the flow and characteristics of network traffic, providing insights into network activity and performance.
- Host-based data focuses on information collected from individual devices or hosts, such as logs, system events, and application behavior.
- Signature-based data involves using predefined patterns or signatures to identify known threats or vulnerabilities.
- False positive is a term used to describe when a security tool incorrectly identifies legitimate activity as malicious.
In this scenario, the analyst is specifically capturing HTTP traffic, which is a network protocol, and analyzing patterns within it. This aligns with the definition of network telemetry.
Which statement accurately describes the difference between a network-based intrusion detection system (NIDS) and a host-based intrusion detection system (HIDS)?
A NIDS is installed on a network; a HIDS is installed on individual devices.
A NIDS is installed on a network and is used to collect and monitor network traffic and network data. A HIDS is installed on a host and is used to monitor the activity of the host.
AI
The accurate statement is: A NIDS is installed on a network; a HIDS is installed on individual devices.
Explanation of Key Differences:
- NIDS (Network-Based Intrusion Detection System):
- Monitors network traffic across multiple devices.
- Captures and analyzes packets to detect suspicious activity.
- Installed at strategic points within the network infrastructure (e.g., firewalls, routers).
- HIDS (Host-Based Intrusion Detection System):
- Installed on individual hosts or devices.
- Monitors system events, file integrity, and application behavior.
- Detects intrusions specific to that particular system.
Additional Points:
- Both NIDS and HIDS can use signature-based or anomaly-based detection techniques.
- NIDS can detect both known and unknown threats based on their behavior.
- HIDS can provide more granular insights into the activities within a specific host.
- They often complement each other to provide comprehensive intrusion detection.
Fill in the blank: The _____ component of an IDS signature includes network traffic information.
header
The header component of an IDS signature includes network traffic information. This includes source and destination IP addresses, source and destination ports, protocols, and traffic direction.
AI
The answer is header.
Explanation of IDS Signature Components:
- Header: This section specifies the network traffic characteristics that the signature is designed to match. It includes:
- Protocol (e.g., TCP, UDP, HTTP)
- Source and destination IP addresses or networks
- Source and destination ports
- Direction of traffic (inbound or outbound)
- Action: This component defines what action the IDS should take if the signature matches, such as generating an alert, logging the event, or dropping the traffic.
- Rule Options: These are additional conditions or filters that can be applied to the signature, such as content matching (looking for specific patterns within packets) or time-based activation (only triggering during certain periods).
- Signature ID: This is a unique identifier assigned to each signature, used for reference and management purposes.
A security analyst creates a Suricata signature to identify and detect security threats based on the direction of network traffic. Which of the following rule options should they use?
Flow
They should use flow. The flow option matches the direction of network traffic flow.
AI
The correct answer is Flow.
Explanation of Suricata Rule Options:
- Flow: This option allows you to specify the direction of network traffic that should trigger the signature. It can be used to detect threats based on whether they are inbound (entering the network) or outbound (leaving the network).
- Message: This option is used to define a custom message that will be displayed in alerts generated by the signature. It’s for providing informative details about the detected threat.
- Rev: This option indicates the revision number of the signature, used for tracking updates and changes.
- Content: This option enables you to search for specific patterns or content within network packets. It’s helpful for detecting threats based on their payload or data content.
In this scenario, since the analyst aims to detect threats based on traffic direction, the flow option is most suitable.
Overview of security information event management (SIEM)
Video: Reexamine SIEM tools
- SIEMs are crucial tools for security analysts: They collect, analyze, and report on security data from multiple sources.
- Data collection and processing: SIEMs gather massive amounts of data from various devices and systems, normalize it into a consistent format, and index it for search.
- Adapting to different SIEM tools: Security analysts need to be flexible and able to use various SIEM platforms like Splunk and Chronicle.
- SIEM platform functionalities: Both Splunk and Chronicle collect, process, store, and provide search capabilities for analyzing security data.
Key takeaway: SIEMs are powerful tools that empower security analysts to efficiently access and analyze security data across an organization, aiding in incident investigations, alert triaging, and system monitoring.
Security Information and Event Management (SIEM) tools are the Swiss Army knives of security analysts, offering a central hub for data collection, analysis, and threat detection. But just like any trusty tool, it’s essential to periodically re-evaluate your SIEM’s effectiveness and ensure it’s still aligned with your evolving security needs.
Why Re-examine Your SIEM?
The security landscape is constantly shifting, with new threats and vulnerabilities emerging all the time. Your SIEM, however, might not be keeping pace. Re-examining your SIEM helps you:
- Identify gaps in coverage: Are you collecting and analyzing data from all your critical sources? Are there blind spots that attackers could exploit?
- Assess rule effectiveness: Are your SIEM rules generating too many false positives, drowning out real threats? Or are they missing subtle indicators of compromise?
- Optimize resource allocation: Are you overburdening your SIEM with irrelevant data? Can you streamline resource allocation for better performance?
- Adapt to new threats: Can your SIEM handle the latest attack vectors and malware strains? Does it integrate with other security tools for a holistic view?
Steps for Re-examining Your SIEM
- Review Data Sources:
- Mapping the terrain: Make a comprehensive list of all data sources generating security-relevant information (firewalls, endpoints, applications, cloud platforms).
- Completeness check: Ensure your SIEM is ingesting data from all identified sources. Are there any gaps or inconsistencies?
- Data prioritization: Analyze the volume and value of data from each source. Prioritize collection and analysis based on criticality and threat potential.
- Evaluate Rule Effectiveness:
- False positive fatigue: Analyze your SIEM logs for a baseline of false positives and negatives. Are existing rules generating too much noise?
- Fine-tuning the filters: Refine your rules to minimize false alerts while capturing genuine threats. Consider factors like user behavior, device type, and anomaly detection.
- Threat intelligence integration: Leverage threat intelligence feeds to update your rules and stay ahead of emerging attack patterns.
- Optimize Resource Allocation:
- Data filtering at the source: Implement pre-filtering mechanisms at data sources to reduce irrelevant information reaching your SIEM.
- Data retention policies: Define data retention policies based on compliance requirements and historical analysis needs. Don’t overload your SIEM with unnecessary data.
- Hardware and software upgrades: Consider upgrading your SIEM hardware and software to handle increased data volume and complexity.
- Adapt to New Threats:
- Threat landscape awareness: Stay updated on the latest cyber threats and attack vectors. Research emerging malware strains and vulnerabilities relevant to your environment.
- SIEM capabilities assessment: Evaluate your SIEM’s ability to detect and respond to these new threats. Does it offer features like threat hunting, behavioral analysis, and incident response tools?
- Integration with other security tools: Consider integrating your SIEM with other security solutions like endpoint detection and response (EDR) or threat intelligence platforms for a broader security posture.
Conclusion:
Re-examining your SIEM is not a one-time event but an ongoing process to ensure it remains your most valuable security ally. By regularly assessing its coverage, rule effectiveness, resource allocation, and adaptability, you can keep your SIEM sharp and your organization secure in the ever-evolving threat landscape.
Remember, your SIEM is only as effective as the data it ingests and the rules it uses. By continually refining these aspects, you can transform your SIEM from a data repository into a proactive threat detection and response engine.
Which of the following steps are involved in the SIEM process for data collection? Select three answers.
Normalize, Collect and process, Index
The SIEM process for data collection is made up of the following three steps: collect and process, normalize, and index. Indexing is the step that sorts data so it can be easily searched and accessed.
As a security analyst, you’ll need to be
able to quickly access the relevant data required to perform your duties. Whether it’s triaging alerts,
monitoring systems, or analyzing log data during
incident investigations, a SIEM is the tool for this job. As a quick review, a SIEM is
an application that collects and analyzes log data to monitor critical
activities in an organization. It does this by collecting, analyzing, and reporting on security data
from multiple sources. Previously, you learned about the SIEM
process for data collection. Let’s revisit this process. First, SIEM
tools COLLECT AND PROCESS enormous amounts of data generated by devices and
systems from all over an environment. Not all data is the same.
As you already know, devices generate data
in different formats. This can be challenging because there is
no unified format to represent the data. SIEM tools make it easy for
security analysts to read and analyze data by NORMALIZING it. Raw data gets processed, so
that it’s formatted consistently and only relevant event
information is included. Finally, SIEM tools INDEX the data, so
it can be accessed through search. All of the events across all the different
sources can be accessed with your fingertips. Isn’t that useful? SIEM tools make it easy
to quickly access and analyze the data flows happening
across networks in an environment. As a security analyst, you may
encounter different SIEM tools. It’s important that
you’re able to adjust and adapt to whichever tool your
organization ends up using. With that in mind, let’s explore some SIEM
tools currently used in the security industry. Splunk is a data analysis platform. Splunk Enterprise Security provides
SIEM solutions that let you search, analyze, and visualize security data. First, it collects data from different
sources. That data gets processed and stored in an index. Then, it can be
accessed in a variety of different ways, like through search. Chronicle is Google Cloud’s SIEM,
which stores security data for search, analysis, and visualization. First, data gets forwarded to Chronicle. This data then gets normalized, or cleaned
up, so it’s easier to process and index. Finally, the data becomes available
to be accessed through a search bar. Next up, we’ll explore how to
search on these SIEM platforms.
Reading: Log sources and log ingestion
Reading
In this reading, you’ll explore more on the importance of log ingestion. You may recall that security information and event management (SIEM) tools collect and analyze log data to monitor critical activities in an organization. You also learned about log analysis, which is the process of examining logs to identify events of interest. Understanding how log sources are ingested into SIEM tools is important because it helps security analysts understand the types of data that are being collected, and can help analysts identify and prioritize security incidents.
SIEM process overview
Previously, you covered the SIEM process. As a refresher, the process consists of three steps:
- Collect and aggregate data: SIEM tools collect event data from various data sources.
- Normalize data: Event data that’s been collected becomes normalized. Normalization converts data into a standard format so that data is structured in a consistent way and becomes easier to read and search. While data normalization is a common feature in many SIEM tools, it’s important to note that SIEM tools vary in their data normalization capabilities.
- Analyze data: After the data is collected and normalized, SIEM tools analyze and correlate the data to identify common patterns that indicate unusual activity.
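To make the normalization step concrete, here is a minimal Python sketch of the idea, assuming a hypothetical space-delimited raw log with key=value fields; real SIEMs map events into much richer standard schemas:

```python
# Toy sketch of SIEM log normalization: map a raw, source-specific log
# line into a consistent key-value schema so it can be searched uniformly.
import json

def normalize(raw: str) -> dict:
    # Hypothetical raw line:
    # "2023-10-05T12:00:00Z sshd failed login user=alice src=203.0.113.7"
    timestamp, source, action, outcome, *fields = raw.split()
    event = {"timestamp": timestamp, "source": source,
             "action": f"{action} {outcome}"}
    for field in fields:  # remaining key=value pairs become event fields
        key, _, value = field.partition("=")
        event[key] = value
    return event

print(json.dumps(normalize(
    "2023-10-05T12:00:00Z sshd failed login user=alice src=203.0.113.7")))
```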
This reading focuses on the first step of this process, the collection and aggregation of data.
Log ingestion
Data is required for SIEM tools to work effectively. SIEM tools must first collect data using log ingestion. Log ingestion is the process of collecting and importing data from log sources into a SIEM tool. Data comes from any source that generates log data, like a server.
In log ingestion, the SIEM creates a copy of the event data it receives and retains it within its own storage. This copy allows the SIEM to analyze and process the data without directly modifying the original source logs. The collection of event data provides a centralized platform for security analysts to analyze the data and respond to incidents. This event data includes authentication attempts, network activity, and more.
Log forwarders
There are many ways SIEM tools can ingest log data. For instance, you can manually upload data or use software to help collect data for log ingestion. Manually uploading data may be inefficient and time-consuming because networks can contain thousands of systems and devices. Hence, it’s easier to use software that helps collect data.
A common way that organizations collect log data is to use log forwarders. Log forwarders are software that automate the process of collecting and sending log data. Some operating systems have native log forwarders. If you are using an operating system that does not have a native log forwarder, you would need to install a third-party log forwarding software on a device. After installing it, you’d configure the software to specify which logs to forward and where to send them. For example, you can configure the logs to be sent to a SIEM tool. The SIEM tool would then process and normalize the data. This allows the data to be easily searched, explored, correlated, and analyzed.
Note: Many SIEM tools utilize their own proprietary log forwarders. SIEM tools can also integrate with open-source log forwarders. Choosing the right log forwarder depends on many factors such as the specific requirements of your system or organization, compatibility with your existing infrastructure, and more.
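As a small illustration, rsyslog (a log forwarder that ships with many Linux distributions) can forward logs with a single line in /etc/rsyslog.conf; the destination host and port here are placeholders:

```
# Forward all facilities and severities to a SIEM collector over TCP
# ("@@" means TCP; a single "@" would mean UDP)
*.* @@siem.example.com:514
```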
Key takeaways
SIEM tools require data to be effective. As a security analyst, you will utilize SIEM tools to access events and analyze logs when you’re investigating an incident. In your security career, you may even be tasked with configuring a SIEM to collect log data. It’s important that you understand how data is ingested into SIEM tools because this enables you to understand where log sources come from which can help you identify the source of a security incident.
Resources
Here are some resources if you’d like to learn more about the log ingestion process for Splunk and Chronicle:
Video: Query for events with Splunk
- SIEM data access: Queries help navigate massive amounts of event data stored in a SIEM database.
- Query specificity: Broad queries are slow and ineffective, while specific queries with keywords, operators, and wildcards return relevant results faster.
- Splunk search example: Searching for “buttercupgames error OR fail*” within a specific 30-day timeframe yields event details and highlights matched terms.
- Refining results: Filtering out unwanted hosts (e.g., www1) further narrows down the search and delivers more targeted information.
- Next steps: Learning advanced search commands in Splunk and querying events in another SIEM tool, Chronicle.
Key takeaway: Effective SIEM search queries are crucial for security analysts to efficiently navigate and analyze event data for security investigations and threat detection.
Splunk, with its robust search capabilities, is a security analyst’s best friend when navigating the vast ocean of security event data. This tutorial equips you with the knowledge to craft effective Splunk queries, enabling you to quickly pinpoint relevant information and investigate potential threats.
Understanding the Splunk Search Bar:
Imagine the Splunk search bar as your captain’s compass, guiding you through the data seas. It’s where you type your search terms and commands to retrieve specific events.
Building Basic Queries:
- Keywords: Start with simple keywords relevant to your search. For example, if you’re investigating unauthorized access attempts, your query might be “user=unknown AND login=fail.”
- Operators: Logical operators like AND, OR, and NOT refine your search. AND narrows down results (e.g., “source=webserver AND error=500”), while OR broadens them (e.g., “user=admin OR user=root”). NOT excludes unwanted events (e.g., “source=firewall NOT severity=low”).
- Wildcards: Use * for flexibility; it matches any character sequence (e.g., “fail*” matches fail, failed, and failure).
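Putting those pieces together, a basic SPL search might look like this (the index, sourcetype, and field names are placeholders for your environment):

```
index=web sourcetype=access_combined (error OR fail*) earliest=-7d
| stats count by host, status
```

The first line narrows the events to matches from the last seven days; the stats command then aggregates the results by host and status code.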
Advanced Search Techniques:
- Time Ranges: Specify a timeframe for your search with the time range picker or SPL time modifiers, such as “earliest=-7d” for the last 7 days.
- Fields and Values: Extract specific data from events using field names and values. For example, “source=IDS AND severity=high | stats count by dest_ip” groups events by destination IP for high-severity IDS alerts.
- Subsearches: Nest a search inside square brackets so its results feed the outer search, useful for chaining conditions such as failed logins followed by sensitive file access (see the sketch after this list).
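For example, a hedged subsearch sketch (the index and field names are placeholders): the bracketed inner search runs first, and its results constrain the outer search:

```
index=auth action=success [ search index=auth action=failure | stats count by user | where count > 5 | fields user ]
```

Here the inner search returns users with more than five failed logins, and the outer search then looks for successful logins by those same users.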
Tips for Effective Splunk Queries:
- Start simple: Begin with basic keywords and gradually add complexity as needed.
- Test and refine: Execute your query and analyze the results. Refine your search terms and operators to get the desired outcome.
- Use search commands: Splunk offers a plethora of commands for data manipulation, aggregation, and visualization. Explore the Splunk documentation to unlock their potential.
- Combine with filters: Leverage Splunk’s powerful filters to narrow down your search further based on specific criteria.
Beyond the Basics:
This tutorial lays the foundation for Splunk query construction. As you delve deeper, explore advanced techniques like regular expressions, custom searches, and data enrichment to unlock Splunk’s full analytical power. Remember, practice makes perfect, so keep querying and honing your skills to become a Splunk search master!
Additional Resources:
- Splunk Search Tutorial: https://docs.splunk.com/Documentation/Splunk/9.1.2/SearchReference/Search
- Splunk Search Reference: https://docs.splunk.com/Documentation/SCS/current/SearchReference/SearchCommandOverview
- Splunk Enterprise Security User Guide: https://docs.splunk.com/Documentation
By mastering Splunk queries, you transform this versatile tool from a data repository into a powerful investigative and analytical engine, safeguarding your organization from potential threats. So, set sail on your Splunk search journey and discover the hidden treasures within your security event data!
Do specific queries improve the speed and relevance of SIEM search results?
Yes
Specific queries improve the speed and relevance of SIEM search results.
Now that we’ve reviewed
how a SIEM works, let’s learn how to search and query events in a SIEM database. Data that’s been imported
into a SIEM can be accessed by entering queries into
the SIEM’s search engine. Massive amounts of data can
be stored in a SIEM database. Some of this data
may date back years. This can make searching for
security events challenging. For example, let’s say you’re searching to find a
failed login event. You search for the event using
the keywords: failed login. This is a very broad query, which can return thousands of results. Broad search queries like this, slow down the response
times of a search engine since it’s searching
across all the indexed data. But, if you specify
additional parameters, like an event ID and a
date and time range, you can narrow down the
search to get faster results. It’s important that
search queries are specific, so
that you can find exactly what you’re
looking for and save time in the search process. Different SIEM tools use
different search methods. For example, Splunk uses its own query language called Search Processing Language, or SPL for short. SPL has many different search
options you can use to optimize search
results, so that you can get the data
you’re looking for. For now, I’ll demonstrate a raw log search in Splunk
Cloud for events that reference errors or failures for a fictional online store
called Buttercup Games. First, we’ll use the search
bar to type in our query: buttercupgames error
OR fail* This search is
specifying the index, which is buttercupgames. We also specify the search
terms: error OR fail. The Boolean operator OR ensures that both of the keywords
will be searched. The asterisk at the
end of the term fail* is known
as a wildcard. This means it will search for all possible endings that
contain the term fail. This helps us expand
our search results because events may label
failures differently. For example, some events
may use the term failed. Next, we’ll select a time range using the time range picker. Remember, the more specific
our search is, the better. Let’s search for data
from the last 30 days. Under the search bar, we
have our search results. There’s a timeline,
which gives us a visual representation of the number of events
over a period. This can be helpful
in identifying event patterns such as
peaks in activity. Under the timeline, there’s
the events viewer, which gives us a list of events
that match our search. Notice how our search terms: buttercupgames and error are highlighted in each event. It doesn’t appear
that any events matching the term fail were found. Each event has a timestamp
and raw logged data. For the events with errors, it appears that there’s
an error relating to the HTTP cookies used in the
Buttercup Games website. At the bottom of
the raw log data, there’s some information
related to the data source, including the host name, source, and source type. This information tells
us where the event data originated from such as
a device or file. If we click on it,
we can choose to exclude it from the
search results. On the search bar,
we can see that the search terms have changed and host!=www1
has been added, which means not to
include www1 hosts. Notice that the new
search results do not contain www1 as a host, but contain www2 and www3. This is just one of the many
ways that you can target your searches to retrieve
information you’re looking for. This search is known
as a raw log search. As a security analyst, you’ll use different
commands to optimize search performance for
faster search results. That completes
querying in Splunk. You’ve learned the importance
of effective queries and how to perform a
basic Splunk search. Coming up, you’ll learn how
to query events in Chronicle.
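As a recap, the final search from this walkthrough can be written in one line (a sketch based on the demo's terms and the excluded host):
buttercupgames error OR fail* host!=www1
Paired with a 30-day time range, this returns error events from the buttercupgames index while excluding events from the www1 host.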
Video: Query for events with Chronicle
Chronicle Search Features:
- Search Fields: Search for specific terms like hostnames, domains, IPs, URLs, emails, usernames, and file hashes.
- Search Types:
- UDM Search (default): Searches normalized data for easier analysis.
- Raw Log Search: Searches raw, un-normalized logs for more granular details or troubleshooting.
- YARA-L Language: Define custom rules for detecting specific events or patterns.
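For example, a minimal YARA-L 2.0 rule sketch for detecting blocked logins might look like the lines below (the rule name and meta values are hypothetical; consult Chronicle's YARA-L documentation for the authoritative syntax):
rule blocked_user_logins {
  meta:
    author = "security_team"
    severity = "Low"
  events:
    $login.metadata.event_type = "USER_LOGIN"
    $login.security_result.action = "BLOCK"
  condition:
    $login
}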
Performing a UDM Search:
- Structured Query Builder: Use this tool for crafting UDM queries.
- Specify Fields and Values: Structure your query using UDM fields and values (e.g., metadata.event_type = "USER_LOGIN").
- Logical Operators: Use operators like AND, OR, and NOT to refine your search criteria.
- Execute the Query: Click the “Query” button to run the search.
Search Results:
- Timeline: Visualizes event occurrences over time, aiding pattern recognition.
- Event List: Displays matching events with timestamps and associated assets (devices).
- Raw Logs: Access detailed raw logs for each event by clicking on it.
- Quick Filters: Further refine results by filtering based on additional fields and values (e.g., target IP addresses).
Chronicle empowers security analysts to navigate the ocean of security events with its powerful search capabilities. This tutorial equips you with the knowledge to craft effective Chronicle queries, enabling you to quickly pinpoint relevant information and investigate potential threats.
Understanding the Chronicle Search Canvas:
Imagine Chronicle’s search canvas as your investigation workbench. This is where you build your queries by combining search terms, filters, and operators.
Building Basic Queries:
- Keywords: Begin with simple keywords relevant to your search. For example, if you’re investigating suspicious file access, your query might be “file_access AND (confidential OR sensitive).”
- Unified Data Model (UDM) Fields: Chronicle organizes data using UDM fields like source, event_type, username, and file_hash. Utilize these fields for targeted searches.
- Operators: Logical operators like AND, OR, and NOT refine your search. AND narrows down results (e.g., "source=firewall AND action=blocked"), while OR broadens them (e.g., "username=malicious OR ip=malware_c2"). NOT excludes unwanted events (e.g., "event_type!=heartbeat").
Advanced Search Techniques:
- Time Ranges: Specify a timeframe for your search, such as a custom window or a relative range like "last 7 days."
- Filters: Narrow down your search by applying filters based on various criteria like device type, severity level, or specific timestamps.
- Subsearches: Nest searches within searches for deeper analysis. Imagine searching for suspicious file access, followed by a specific user login event, like “file_access AND confidential | search username=suspicious_user.”
- YARA-L Rules: For advanced threat detection, utilize YARA-L, a rule-based language, to define custom patterns for identifying specific activities or malware characteristics.
Tips for Effective Chronicle Queries:
- Start simple: Begin with basic keywords and gradually add complexity as needed.
- Test and refine: Execute your query and analyze the results. Refine your search terms and filters to get the desired outcome.
- Utilize UDM fields: Understand and leverage UDM fields for precise data exploration.
- Combine with filters: Employ filters to effectively narrow down your search and focus on relevant data.
Beyond the Basics:
This tutorial lays the foundation for Chronicle query construction. As you delve deeper, explore advanced techniques like regular expressions, custom time ranges, and data enrichment to unlock Chronicle’s full analytical power. Remember, practice makes perfect, so keep querying and honing your skills to become a Chronicle search master!
Additional Resources:
- Chronicle Search Tutorial: https://cloud.google.com/chronicle/docs
- Chronicle UDM Reference: https://cloud.google.com/chronicle/docs
- Chronicle YARA-L Guide: https://cloud.google.com/chronicle/docs
By mastering Chronicle queries, you transform this versatile tool from a data repository into a powerful investigative and analytical engine, safeguarding your organization from potential threats. So, set sail on your Chronicle search journey and discover the hidden treasures within your security event data!
Fill in the blank: Chronicle uses _____ to search through normalized data.
Unified Data Model (UDM)
Chronicle uses UDM to search through normalized data.
Chronicle allows you to
search and filter log data. In this video,
we’ll explore using Chronicle’s search field
to locate an event. Chronicle uses the
YARA-L language to define rules for detection. It’s a computer
language used to create rules for searching
through ingested log data. For example, you can use YARA-L
to write a rule to detect specific activities related to the exfiltration
of valuable data. Using Chronicle’s search field, you can search for fields
like hostname, domain, IP, URL, email, username, or file hash. Using the search field, you can enter different
types of searches. The default method of
search is using UDM search, which stands for
Unified Data Model. It searches through
normalized data. If you can’t find the
data you’re looking for by searching the
normalized data, you have the option of
searching raw logs. Raw log search searches through the logs which have
not been normalized. From our earlier discussion
on the SIEM process, you may recall that raw logs get processed during the
normalization step. During normalization, all
of the relevant information from raw logs gets
extracted and formatted, making the data
easier to search. A reason we might need to
search raw logs is to find data that may not have been included in the normalized logs, like specific fields
which have not been normalized, or to troubleshoot
data ingestion problems. Let’s examine a UDM search for a failed login
using Chronicle. First, let’s click on the structured query
builder icon, so that we can perform
a UDM search. I’ll type in the search: metadata.event_type = "USER_LOGIN" AND security_result.action = "BLOCK". Let’s break down
this UDM search. Since we are searching
for normalized data, we need to specify a search
that uses UDM format. UDM events have a set
of common fields. The metadata.event_type
field details the event’s type. Here, we’re asking
Chronicle to find an authentication activity
event, a user login. Next, there’s AND, which is a logical
operator that tells the search engine that results must contain both terms. Finally, the
security_result.action field specifies a security action
such as allow or block. Here, the action is BLOCK. This means the user login
was blocked or failed. Now, we’ll press
the query button. We’re going to focus on
searching normalized data. We’re presented with a screen
with the search results. There’s lots of
information here. Under UDM Search, we can observe our search terms. There’s also a bar
graph timeline visualizing the failed
login events over a period. At a quick glance, this gives us a snapshot of the failed
login activity over time, allowing us to spot
possible patterns. Under the timeline, there’s
a list of events with timestamps associated
with this search. Under each event,
there’s an asset, which is the name of a device. For example, this event shows a failed login for
a user named alice. If we click the
event, we can open up the raw log associated
with the event. We can interpret these
raw logs for more detail about the event’s activity
during the investigation. To the left, there’s
Quick Filters. These are additional
fields or values that we can use to filter
the search results. For example, if we
click: target.ip, we are given a list
of IP addresses. If we click one of
these IP addresses, we can filter the search
results to contain only this target IP address. This helps us find specific data we’re looking for and helps us save time in the
process. Great work! Now you know how to perform
a search using Chronicle. In the upcoming activity, you’ll have the chance
to perform searches using the SIEM tools
we’ve just discussed.
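As a sketch, the quick filter applied in this demo could also be expressed directly in the UDM query (the IP address below is a hypothetical example from a documentation range):
metadata.event_type = "USER_LOGIN" AND security_result.action = "BLOCK" AND target.ip = "198.51.100.7"
This narrows the failed login results to a single target IP address without clicking through the Quick Filters panel.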
Reading: Search methods with SIEM tools
Reading
So far, you’ve learned about how you can use security information and event management (SIEM) tools to search for security events such as failed login attempts. Remember, SIEM is an application that collects and analyzes log data to monitor critical activities in an organization. In this reading, you’ll examine how SIEM tools like Splunk and Chronicle use different search methods to find, filter, and transform search results.
Not all organizations use the same SIEM tool to gather and centralize their security data. As a security analyst, you’ll need to be ready to learn how to use different SIEM tools. It’s important to understand the different types of searches you can perform using SIEM tools so that you can find relevant event data to support your security investigations.
Splunk searches
As you’ve learned, Splunk has its own querying language called Search Processing Language (SPL). SPL is used to search and retrieve events from indexes using Splunk’s Search & Reporting app. An SPL search can contain many different commands and arguments. For example, you can use commands to transform your search results into a chart format or filter results for specific information.
Here is an example of a basic SPL search that is querying an index for a failed event:
index=main fail
- index=main: This is the beginning of the search command that tells Splunk to retrieve events from an index named main. An index stores event data that’s been collected and processed by Splunk.
- fail: This is the search term. This tells Splunk to return any event that contains the term fail.
Knowing how to effectively use SPL has many benefits. It helps shorten the time it takes to return search results. It also helps you obtain the exact results you need from various data sources. SPL supports many different types of searches that are beyond the scope of this reading. If you would like to learn more about SPL, explore Splunk’s Search Reference.
Pipes
Previously, you might have learned about how piping is used in the Linux bash shell. As a refresher, piping sends the output of one command as the input to another command.
SPL also uses the pipe character | to separate the individual commands in the search. It’s also used to chain commands together so that the output of one command combines into the next command. This is useful because you can refine data in various ways to get the results you need using a single command.
Here is an example of two commands that are piped together:
index=main fail | chart count by host
- index=main fail: This is the beginning of the search command that tells Splunk to retrieve events from an index named main for events containing the search term fail.
- |: The pipe character separates and chains the two commands index=main fail and chart count by host. This means that the output of the first command index=main fail is used as the input of the second command chart count by host.
- chart count by host: This command tells Splunk to transform the search results by creating a chart according to the count or number of events. The argument by host tells Splunk to list the events by host, which are the names of the devices the events come from. This command can be helpful in identifying hosts with excessive failure counts in an environment.
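As another sketch against the same index, the top command offers a quick alternative for finding the most frequent values of a field:
index=main fail | top limit=5 host
This returns the five hosts with the most events containing the term fail, along with their counts and percentages.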
Wildcard
A wildcard is a special character that can be substituted with any other character. A wildcard is usually symbolized by an asterisk character *. Wildcards match characters in string values. In Splunk, the wildcard that you use depends on the command that you are using the wildcard with. Wildcards are useful because they can help find events that contain data that is similar but not entirely identical. Here is an example of using a wildcard to expand the search results for a search term:
index=main fail*
- index=main: This command retrieves events from an index named main.
- fail*: The wildcard after fail represents any character. This tells Splunk to search for all possible endings that contain the term fail. This expands the search results to return any event that contains the term fail such as “failed” or “failure”.
Pro tip: Double quotations are used to specify a search for an exact phrase or string. For example, if you want to only search for events that contain the exact phrase login failure, you can enclose the phrase in double quotations “login failure”. This search will match only events that contain the exact phrase login failure and not other events that contain the words failure or login separately.
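For example, these two searches behave differently (a sketch against the same main index):
index=main "login failure"
index=main login failure
The first matches only events containing the exact phrase, while the second matches events containing both words anywhere in the event.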
Chronicle searches
In Chronicle, you can search for events using the Search field. You can also use Procedural Filtering to apply filters to a search to further refine the search results. For example, you can use Procedural Filtering to include or exclude search results that contain specific information relating to an event type or log source. There are two types of searches you can perform to find events in Chronicle: a Unified Data Model (UDM) Search or a Raw Log Search.
Unified Data Model (UDM) Search
The UDM Search is the default search type used in Chronicle. You can perform a UDM search by typing your search, clicking on “Search,” and selecting “UDM Search.” Through a UDM Search, Chronicle searches security data that has been ingested, parsed, and normalized. A UDM Search retrieves search results faster than a Raw Log Search because it searches through indexed and structured data that’s normalized in UDM.
A UDM Search retrieves events formatted in UDM and these events contain UDM fields. There are many different types of UDM fields that can be used to query for specific information from an event. Discussing all of these UDM fields is beyond the scope of this reading, but you can learn more about UDM fields by exploring Chronicle’s UDM field list. Know that all UDM events contain a set of common fields including:
- Entities: Entities are also known as nouns. All UDM events must contain at least one entity. This field provides additional context about a device, user, or process that’s involved in an event. For example, a UDM event that contains entity information includes the details of the origin of an event such as the hostname, the username, and IP address of the event.
- Event metadata: This field provides a basic description of an event, including what type of event it is, timestamps, and more.
- Network metadata: This field provides information about network-related events and protocol details.
- Security results: This field provides the security-related outcome of events. An example of a security result can be an antivirus software detecting and quarantining a malicious file by reporting “virus detected and quarantined.”
Here’s an example of a simple UDM search that uses the event metadata field to locate events relating to user logins:
metadata.event_type = "USER_LOGIN"
- metadata.event_type = "USER_LOGIN": This UDM field metadata.event_type contains information about the event type. This includes information like timestamp, network connection, user authentication, and more. Here, the event type specifies USER_LOGIN, which searches for events relating to authentication.
Using just the metadata fields, you can quickly start searching for events. As you continue practicing searching in Chronicle using UDM Search, you will encounter more fields. Try using these fields to form specific searches to locate different events.
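For example, combining event metadata with another common UDM field might look like this sketch (the hostname value is hypothetical):
metadata.event_type = "USER_LOGIN" AND principal.hostname = "accounting-pc"
This narrows login events to those originating from a specific device.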
Raw Log Search
If you can’t find the information you are searching for through the normalized data, using a Raw Log Search will search through the raw, unparsed logs. You can perform a Raw Log Search by typing your search, clicking on “Search,” and selecting “Raw Log Search.” Because it is searching through raw logs, it takes longer than a structured search. In the Search field, you can perform a Raw Log Search by specifying information like usernames, filenames, hashes, and more. Chronicle will retrieve events that are associated with the search.
Pro tip: Raw Log Search supports the use of regular expressions, which can help you narrow down a search to match on specific patterns.
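For example, a pattern like the sketch below could match several failure-related variants in unparsed logs (assuming RE2-style regular expression support; exact syntax may vary):
fail(ed|ure)?
This single pattern matches the strings fail, failed, and failure.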
Key takeaways
SIEM tools like Splunk and Chronicle have their own methods for searching and retrieving event data. As a security analyst, it’s important to understand how to leverage these tools to quickly and efficiently find the information you need. This will allow you to explore data in ways that support detecting threats, as well as rapidly responding to security incidents.
Resources for more information
Here are some resources if you’d like to learn more about searching for events with Splunk and Chronicle:
- Splunk’s Search Manual on how to use the Splunk search processing language (SPL)
- Chronicle’s quickstart guide on the different types of searches
Reading: Follow-along guide for Splunk sign-up
Practice Quiz: Activity: Perform a query with Splunk
Practice Quiz: Activity: Perform a query with Chronicle
Practice Quiz: Test your knowledge: Overview of SIEM tools
In Search Processing Language (SPL), which special character is a wildcard that can be used to substitute with any other character?
*
In Search Processing Language (SPL), the * character is a wildcard which is a special character that can be substituted with any other character.
AI
The answer is asterisk (*).
In Search Processing Language (SPL), the asterisk (*) functions as a wildcard character, enabling you to match any sequence of characters, including zero characters. It’s a versatile tool for crafting flexible and comprehensive searches.
Here’s how it works:
1. Matching any number of characters:
- error* matches “error,” “errors,” “error404,” “erroneous,” etc.
- *.log matches any file ending with “.log” (e.g., “access.log,” “error.log,” “system.log”).
2. Positioning within text:
- error*.log matches “error404.log” but not “error-message.txt” (due to the period).
- *error* matches “internal_error” and “network_error_500.”
Additional wildcard in the like() function:
- Underscore (_): SPL’s like() eval function uses SQL-style pattern matching, where the percent sign (%) matches any number of characters and the underscore matches exactly one character. For example, like(field, "err_r") matches “error” but not “errror” or “err”. A pattern such as "192.168._._" would only match addresses whose final two octets are one character each, so use % for multi-digit octets. See the sketch after the key points below.
Key points:
- Enclose phrases in double quotes when you need to match an exact string rather than separate terms.
- Exercise caution with wildcards, as they can potentially slow down searches if used excessively.
- For complex pattern matching, consider using regular expressions, which offer more sophisticated matching capabilities.
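To illustrate the single-character wildcard in like(), here is a sketch that reuses the host values from the earlier Splunk demo:
index=main | where like(host, "www_")
This matches hosts such as www1, www2, and www3, but not a host named www10, because the underscore matches exactly one character.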
Which of the following steps are part of the security information and event management (SIEM) process? Select three answers.
- Collect and process data
- Normalize data so it is ready to read and analyze
- Index data to improve search performance
AI
The following steps are part of the SIEM process:
- Collect and process data: SIEM gathers security-related data from various sources across an organization’s IT infrastructure. This includes firewalls, servers, applications, databases, and more. The data is then normalized and standardized into a common format for analysis.
- Normalize data so it is ready to read and analyze: SIEM ensures that data from different sources is consistent and comparable, enabling meaningful analysis. This involves converting data into a common format, removing redundancies, and filling in missing values.
- Index data to improve search performance: SIEM indexes the collected and normalized data so that it can be searched and retrieved quickly, which speeds up analysis and incident investigations.
Fill in the blank: Chronicle uses _____ to search through unstructured logs.
raw log search
Chronicle uses raw log search to search through unstructured logs.
AI
The answer is raw log search.
Chronicle is designed to efficiently search through vast amounts of unstructured log data directly, without requiring prior schema definition or data normalization. This capability is known as raw log search. It empowers users to quickly uncover insights and investigate threats within raw log data, even when it’s not organized in a structured format.
Key features of Chronicle’s raw log search:
- Scalability: Handles massive data volumes without performance degradation.
- Speed: Optimized to return results from large raw datasets, though raw log searches typically take longer than UDM searches over normalized data.
- Flexibility: Works with diverse log formats and structures, eliminating the need for upfront data preparation.
- Context: Preserves original log context, providing complete visibility into security events.
- Powerful query language: Supports a rich query language for precise and flexible log exploration.
Chronicle also employs other technologies for structured data analysis:
- Entity search: Identifies and groups related information across data sources to reveal broader patterns and relationships.
- Unified data model: Organizes structured data from various sources into a consistent format for correlation and analysis.
- Metadata: Utilizes metadata to enhance search and analysis capabilities, such as filtering and grouping results based on specific criteria.
Which of the following is Splunk’s query language?
SPL
Splunk uses its own query language known as Search Processing Language (SPL).
AI
The answer is SPL.
SPL stands for Search Processing Language, and it’s the powerful query language specifically designed for interacting with Splunk’s data platform. It enables users to:
- Search: Find specific events or patterns within the indexed data.
- Analyze: Extract insights, trends, and statistics from the data.
- Report: Create visualizations and dashboards to communicate findings effectively.
- Automate: Trigger actions or workflows based on search results.
Key features of SPL:
- Flexibility: Works with structured, semi-structured, and unstructured data.
- Powerful commands: Offers a rich set of commands for filtering, sorting, grouping, aggregating, and transforming data.
- Time-based analysis: Supports time-based functions for examining events over time.
- Statistical analysis: Includes statistical functions for calculating averages, standard deviations, correlations, and more.
- Customizable: Can be extended with custom functions and search macros for tailored analysis.
Other terms mentioned are not Splunk’s query language:
- UDM (Unified Data Model): A data organization approach in Chronicle.
- IDS (Intrusion Detection System): A security tool for detecting intrusions.
- SQL (Structured Query Language): A standard language for relational databases, not primarily used in Splunk.
Review: Network traffic and logs using IDS and SIEM tools
Video: Wrap-up
Accomplishments:
- Mastered Log Analysis: Learned how to read and analyze log files, including creation, formats, and interpretation.
- Demystified IDS: Compared network-based and host-based intrusion detection systems and explored signature writing, detection, and alerting.
- Suricata Savvy: Gained practical experience exploring signatures and alerts using the Suricata command line.
- Splunk & Chronicle Expertise: Developed skills in searching SIEM tools like Splunk and Chronicle through tailored queries.
Skills Gained for a Security Analyst Role:
- In-depth Log Analysis: Analyze network traffic for indicators of compromise with expertise in log analysis.
- Signature Read/Write: Understand how signatures work and write them to detect, log, and alert on intrusions.
- SIEM Proficiency: Access and leverage log data within SIEM tools for effective security monitoring.
Overall Takeaway:
This section equipped you with essential skills for identifying and responding to security threats as a security analyst. You’ve built a strong foundation in log analysis, intrusion detection, and SIEM tool usage, making you well-prepared for further cybersecurity exploration.
Congratulations! You’ve made it
to the end of this section. You’ve made so
much progress in your security journey. Let’s review what we learned. You learned all about how to read and
analyze logs. You examined how log files are created and
used for analysis. You also compared different
types of common log formats and learned how to read them. You extended your understanding of
intrusion detection systems by comparing network-based systems and
host-based systems. You also learned how to
interpret signatures. You examined how signatures are written
and also how they detect, log, and alert on intrusions. You interacted with Suricata in the command line
to examine and interpret signatures and alerts. Lastly, you learned how to search in
SIEM tools like Splunk and Chronicle. You learned about the importance of
crafting tailored queries to locate events. At the forefront of incident response,
monitoring and analyzing network traffic for indicators
of compromise is one of the primary goals. Being able to perform in-depth log
analysis and knowing how to read and write signatures and how to access log data are all skills
that you’ll use as a security analyst.
Reading: Glossary terms from module 4
Terms and definitions from Course 6, Module 4
Anomaly-based analysis: A detection method that identifies abnormal behavior
Array: A data type that stores data in a comma-separated ordered list
Common Event Format (CEF): A log format that uses key-value pairs to structure data and identify fields and their corresponding values
Configuration file: A file used to configure the settings of an application
Endpoint: Any device connected on a network
Endpoint detection and response (EDR): An application that monitors an endpoint for malicious activity
False positive: An alert that incorrectly detects the presence of a threat
Host-based intrusion detection system (HIDS): An application that monitors the activity of the host on which it’s installed
Intrusion detection systems (IDS): An application that monitors system activity and alerts on possible intrusions
Key-value pair: A set of data that represents two linked items: a key, and its corresponding value
Log: A record of events that occur within an organization’s systems
Log analysis: The process of examining logs to identify events of interest
Log management: The process of collecting, storing, analyzing, and disposing of log data
Logging: The recording of events occurring on computer systems and networks
Network-based intrusion detection system (NIDS): An application that collects and monitors network traffic and network data
Object: A data type that stores data in a comma-separated list of key-value pairs
Search Processing Language (SPL): Splunk’s query language
Security information and event management (SIEM): An application that collects and analyzes log data to monitor critical activities in an organization
Signature: A pattern that is associated with malicious activity
Signature analysis: A detection method used to find events of interest
Suricata: An open-source intrusion detection system, intrusion prevention system, and network analysis tool
Telemetry: The collection and transmission of data for analysis
Wildcard: A special character that can be substituted with any other character
YARA-L: A computer language used to create rules for searching through ingested log data
Zero-day: An exploit that was previously unknown
Quiz: Module 4 challenge
Which of the following refers to a record of events that occur within an organization’s systems?
Logs
AI
The correct answer is: Logs
Here’s why:
- Occurrences: This term is quite general and could refer to any event, not necessarily within a system.
- Log forwarder: This is a tool that sends logs from one system to another, not the records themselves.
- Log sources: This refers to the systems that generate logs, not the logs themselves.
- Logs: This is the most specific term for a record of events that occur within a system. It includes information about the event, such as the time, source, and details.
Therefore, Logs is the most accurate answer to the question about a record of events within an organization’s systems.
What is the difference between a log and log analysis?
A log is a record of events that occur within an organization’s systems. Log analysis is the process of examining logs to identify events of interest.
AI
The correct answer is: A log is a record of events that occur within an organization’s systems. Log analysis is the process of examining logs to identify events of interest.
Here’s a breakdown of the key differences:
Log:
- Definition: A chronological record of events or actions that have occurred within a system or application.
- Purpose: To track system activity, troubleshoot issues, and maintain a historical record.
- Format: Typically text-based, with structured fields like timestamps, source, event details, and severity levels.
Log Analysis:
- Definition: The process of collecting, parsing, and examining logs to uncover patterns, trends, anomalies, or specific events.
- Purpose: To gain insights into system health, security threats, user behavior, performance bottlenecks, and other relevant information.
- Methods: Involves tools and techniques to search, filter, correlate, visualize, and interpret log data.
Key Differences:
- Logs are the raw data, while log analysis is the process of extracting meaningful information from that data.
- Logs are generated automatically by systems, while log analysis is a proactive activity performed by analysts.
- Logs provide a detailed record of events, while log analysis focuses on identifying patterns and insights within those events.
Examine the following log:
<111>1 2020-04-12T23:20:50.52Z my.machine.com evntslog - ID01 [user@98274 iut="2" eventSource="Mobile" eventID="24"][Priority@98274 class="low"] Computer A
What field value indicates the type of device that this event originated from?
Mobile
AI
The field value that indicates the type of device that originated the event is eventSource=”Mobile”.
Here’s a breakdown of the log fields:
- <111>1: Priority (111) and version (1) of the log message
- 2020-04-12T23:20:50.52Z: Timestamp of the event
- my.machine.com: Hostname of the device generating the log
- evntslog: Name of the application that generated the event
- - ID01: A nil process ID (the hyphen) followed by the message ID (ID01)
- [user@98274 iut="2" eventSource="Mobile" eventID="24"][Priority@98274 class="low"]: Structured data fields
- eventSource="Mobile": Indicates the event originated from a mobile device
- eventID="24": Specific event identifier
- iut="2": Unclear from the provided context
- Priority@98274 class="low": Event priority is low
- Computer A: Additional information, perhaps the device name
Therefore, the eventSource field, with the value “Mobile”, directly specifies the device type.
Consider the following scenario:
A security analyst at a midsized company is tasked with installing and configuring a host-based intrusion detection system (HIDS) on a laptop. The security analyst installs the HIDS and wants to test whether it is working properly by simulating malicious activity. The security analyst runs unauthorized programs on the laptop, which the HIDS successfully detects and alerts on.
What is the laptop an example of?
An endpoint
AI
The correct answer is An endpoint.
Explanation:
- Endpoint: A device or system that connects to a network, such as a laptop, desktop, server, or mobile device. In this scenario, the laptop is the endpoint where the HIDS is installed and monitoring for malicious activity.
- Signature: A pattern or set of rules used by security tools to identify known threats. While the HIDS likely uses signatures to detect unauthorized programs, the laptop itself is not a signature.
- Agent: A software component that runs on an endpoint and collects data or performs actions on behalf of a central system. The HIDS might have an agent component, but the laptop is the endpoint where the agent resides.
- Log forwarder: A tool that transmits log data from one system to another. While the HIDS might send logs to a central server, the laptop itself is not a log forwarder.
Therefore, the laptop is the endpoint being protected by the HIDS in this scenario.
What information is included in a signature’s header? Select all that apply.
Protocol, Port number, IP address
AI
The information included in a signature’s header includes:
- Protocol: The protocol the signature applies to (e.g., TCP, UDP, HTTP, ICMP).
- IP address: The source and destination IP addresses the rule matches.
- Port number: The source and destination ports the rule matches.
The header also specifies the direction of traffic with an arrow (->). The action (such as alert, drop, or pass) appears before the header, and detection details like content matches belong in the rule options, not the header.
Examine this Suricata signature:
alert http 167.215.72.95 any -> 156.150.71.141 80 (msg:"GET on wire"; flow:established,to_server; content:"GET"; sid:12345; rev:2;)
What is the destination port?
80
AI
The destination port in the Suricata signature is 80.
Breakdown of the signature:
- alert http: Indicates an alert for HTTP traffic.
- 167.215.72.95 any -> 156.150.71.141 80: Specifies the traffic flow:
- 167.215.72.95: Source IP address
- any: Source port (any port)
- ->: Traffic direction (from source to destination)
- 156.150.71.141: Destination IP address
- 80: Destination port (port 80 is commonly used for HTTP)
- (msg:"GET on wire"; flow:established,to_server; content:"GET"; sid:12345; rev:2;): Additional signature details:
- msg:"GET on wire": Message to display when the signature triggers
- flow:established,to_server: Applies to established flows going to the server
- content:"GET": Looks for the string "GET" in the traffic
- sid:12345: Signature ID
- rev:2: Revision number of the signature
Therefore, the signature is designed to detect HTTP traffic going to port 80 (the destination port) on the server with IP address 156.150.71.141.
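As a further sketch, a custom signature following the same header structure might look like this (the IP address, port, and sid are hypothetical; custom sids conventionally start at 1000000):
alert tcp any any -> 10.0.0.5 22 (msg:"SSH connection attempt to restricted host"; sid:1000001; rev:1;)
Here the action is alert, and the header matches TCP traffic from any source IP and port to port 22 on host 10.0.0.5.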
Fill in the blank: Suricata uses the _____ format for event and alert output.
EVE JSON
AI
The answer is EVE JSON.
Explanation:
- EVE JSON (Extensible Event Format JSON): Suricata’s primary format for logging events and alerts. It’s a structured, human-readable, and machine-parseable format based on JSON, making it easily consumable by various SIEM and log analysis tools.
- HTML: HyperText Markup Language, used for web page structure and content, not Suricata’s output format.
- CEF (Common Event Format): Another standard for security event logging, but not Suricata’s default choice.
- HTTP: HyperText Transfer Protocol, used for web communication, not a logging format.
Therefore, Suricata utilizes EVE JSON to present events and alerts in a structured and accessible manner.
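To make this concrete, an abridged EVE JSON alert record might look like the following sketch (values are hypothetical, and real output includes additional fields):
{
  "timestamp": "2024-01-15T08:30:00.000000+0000",
  "event_type": "alert",
  "src_ip": "203.0.113.10",
  "dest_ip": "198.51.100.5",
  "dest_port": 80,
  "proto": "TCP",
  "alert": {
    "signature": "GET on wire",
    "signature_id": 12345,
    "severity": 3
  }
}
Each field is a key-value pair, which is what makes EVE JSON straightforward for SIEM tools to parse and index.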
Which type of Splunk query searches through unstructured log records?
Raw log search
AI
The correct answer is Raw log search.
Explanation:
- Raw log search: This type of Splunk query directly examines the raw, unaltered contents of log files, allowing you to search even when data isn’t indexed or structured. It’s useful for finding information that might be missed by structured searches or for troubleshooting indexing issues.
- UDM search: UDM (Unified Data Model) searches are used in Chronicle to query data that has been normalized into a common model, providing a consistent way to search across different data sources. It requires data to be structured and indexed, so it does not search raw records.
- Reference search: This type of search uses a lookup file to associate values with keys, often used for mapping IP addresses to hostnames or usernames to employee IDs. It doesn’t directly search raw log records.
- Index search: This is the most common type of Splunk search, working with data that has been indexed and structured. It’s generally faster and more efficient than raw log searches, but it requires data to be properly indexed beforehand.
Therefore, when you need to search through unstructured log records directly, a raw log search is the appropriate approach in Splunk.
Which Unified Data Model (UDM) field search specifies a security action?
security_result.action
AI
The correct answer is security_result.action.
Explanation:
- security_result.action: This UDM field specifically captures the action taken by a security tool or system in response to a detected event. It’s designed to record actions like blocking, allowing, logging, or generating an alert.
- metadata.event_type: This field classifies the type of event, such as a network connection, file access, or process execution, but it doesn’t indicate the action taken.
- block: This is a potential value for the security_result.action field, indicating that a specific action was blocked, but it’s not a field name itself.
- action: This is a generic term that could refer to various events or activities, but it’s not a standard UDM field for specifying security actions.
Therefore, to search for events based on the security action taken, the most appropriate UDM field to use is security_result.action.
What are the steps in the SIEM process for data collection? Select three answers.
Collect, Normalize, Index
AI
The steps in the SIEM process for data collection are:
1. Collect:
- Gather log and event data from various sources across the IT infrastructure, such as firewalls, servers, applications, endpoints, and network devices.
2. Normalize:
- Convert the collected data into a common format, ensuring consistency and compatibility for analysis. This often involves parsing different log formats, extracting relevant fields, and mapping data to a standard schema.
3. Index:
- Organize the normalized data into searchable indexes, allowing for efficient retrieval and analysis. This structures the data in a way that enables fast querying and filtering.
Other important steps in SIEM data processing (not part of the initial collection phase):
4. Analyze:
- Utilize SIEM tools to analyze the collected and indexed data, identifying patterns, anomalies, and potential threats. This involves applying various techniques like correlation, rule-based detection, machine learning, and statistical analysis.
5. Respond:
- Take appropriate actions based on the analysis results, such as alerting security teams, generating incident tickets, blocking malicious activity, or initiating further investigation. This involves integrating SIEM with other security tools and response workflows.
Quiz: Portfolio Activity: Finalize your incident handler’s journal
Reading: Portfolio Activity Exemplar: Finalize your incident handler’s journal
Reading
Completed Exemplar
To review the exemplar for this course item, click the link and select Use Template.
Link to exemplar: Completed incident handler’s journal exemplar
Assessment of Exemplar
Compare the exemplar to your completed activity. Review your work using each of the criteria in the exemplar. What did you do well? Where can you improve? Use your answers to these questions to guide you as you continue to progress through the course.
Note: The exemplar represents one of many possible ways to complete the activity. Yours will likely differ in certain ways. What’s important is that your incident handler’s journal contains journal entries that record the information of activities that you completed and other thoughts and reflections about the course.
The exemplar records details and information from the course activities and includes the following:
- A total of 4 completed journal entries, with the Date, Entry, and Description section filled in
- 2 journal entries include a description of the incident investigation in The 5 W’s section
- 2 journal entries describe the purpose of a cybersecurity tool in the Tool(s) used section, such as Wireshark, tcpdump, or VirusTotal, and how they were used
- The reflection entry answers the reflection questions in the Reflections/Notes section
Congratulations on completing Course 6!
Video: Course wrap-up
Key Takeaways:
- Incident Response Lifecycle: You grasped the coordinated efforts of security teams, navigating tools like documentation, detection, and management systems.
- Network Traffic Monitoring & Analysis: Mastered capture and analysis of network packets with tools like tcpdump, identifying indicators of compromise.
- Incident Response Phases: You delved into detection, analysis, documentation (chain of custody, playbooks, reports), and post-incident activities.
- Recovery & Post-Incident Strategies: You explored effective approaches for system restoration and further security assessments.
- Log & Alert Interpretation: You trained in reading Suricata signatures and rules, and leveraged SIEM tools (Splunk, Chronicle) for event and log searches.
Real-World Impact:
As a security analyst, you’ll tackle diverse challenges. This course equips you to:
- Investigate Evidence: Analyze data effectively and identify critical information.
- Document & Report: Maintain precise records for clear communication and future reference.
- Respond & Mitigate: Leverage skills and tools to address incidents swiftly and effectively.
The Adventure Continues:
Embrace continuous learning! Soon, you’ll explore Python programming to automate security tasks, expanding your cybersecurity toolkit.
Congratulations! You’ve made significant progress in your security journey. Keep striving for excellence!
Congratulations on
completing this course on detection and response! As you’ve progressed,
we’ve covered a wide range of
topics and tools. Let’s take a moment to
review what you’ve learned. First, we began with an overview of the incident
response lifecycle. You learned how security
teams coordinate their response efforts. And you explored the documentation, detection, and management tools used in incident response. Next, you learned how to monitor and analyze network traffic. You learned about capturing and analyzing packets using
packet sniffers. You also practiced using
tools like tcpdump to capture and
analyze network data to identify indicators
of compromise. Then, we explored
processes and procedures involved in the phases of the incident
response lifecycle. You learned about
techniques related to incident detection and analysis. You also learned about documentation like
chain of custody, playbooks, and final reports. We ended with
exploring strategies used for recovery and
post-incident activity. Finally, you learned how to
interpret logs and alerts. You explored Suricata
on the command line to read and understand
signatures and rules. You also used SIEM
tools like Splunk and Chronicle to search
for events and logs. As a security analyst, you’ll be presented with a
new challenge every day. Whether it’s
investigating evidence or documenting your work, you’ll use what you’ve
learned in this course to effectively
respond to incidents. I’m so glad to have been on this learning journey with you. You’ve done a fantastic job in expanding your knowledge and learning new tools to add
to your security toolbox. One of the things I love about the security field
is that there’s always something new to learn. And coming up, you’ll continue
your learning journey by exploring a programming
language called Python, which can be used to
automate security tasks. Keep up the great work!
Reading: Course 6 glossary
Reading
Reading: Get started on the next course
Reading
Congratulations on completing Course 6 of the Google Cybersecurity Certificate: Sound the Alarm: Detection and Response! In this part of the program, you were introduced to the incident response lifecycle, operations, and associated tools. You learned about network monitoring, network analysis, and you practiced capturing network traffic using network protocol analyzers. You explored more on the processes and procedures in the stages of incident response. Finally, you concluded with learning more about logs, signatures, and alerts. You also practiced using IDS and SIEM tools.
The Google Cybersecurity Certificate has eight courses:
- Foundations of Cybersecurity — Explore the cybersecurity profession, including significant events that led to the development of the cybersecurity field and its continued importance to organizational operations. Learn about entry-level cybersecurity roles and responsibilities.
- Play It Safe: Manage Security Risks — Identify how cybersecurity professionals use frameworks and controls to protect business operations, and explore common cybersecurity tools.
- Connect and Protect: Networks and Network Security — Gain an understanding of network-level vulnerabilities and how to secure networks.
- Tools of the Trade: Linux and SQL — Explore foundational computing skills, including communicating with the Linux operating system through the command line and querying databases with SQL.
- Assets, Threats, and Vulnerabilities — Learn about the importance of security controls and developing a threat actor mindset to protect and defend an organization’s assets from various threats, risks, and vulnerabilities.
- Sound the Alarm: Detection and Response — Understand the incident response lifecycle and practice using tools to detect and respond to cybersecurity incidents. (This is the course you just completed. Well done!)
- Automate Cybersecurity Tasks with Python — Explore the Python programming language and write code to automate cybersecurity tasks.
- Put It to Work: Prepare for Cybersecurity Jobs — Learn about incident classification, escalation, and ways to communicate with stakeholders. This course closes out the program with tips on how to engage with the cybersecurity community and prepare for your job search.
Now that you have completed this course, you are ready to move on to the next course: Automate Cybersecurity Tasks with Python.