Test Pilot

Evaluate how HoundBytes operates in your environment before committing to a long-term engagement. During the pilot we deploy our monitoring and detection stack on a limited set of systems and run it under real operating conditions. This allows your team to evaluate visibility, detection capability, and operational workflow with minimal risk.

Duration: up to 30 days

Pilot at a Glance

Scope

a representative sample of up to 10 production assets

Deployment

HoundBytes monitoring stack installed on pilot systems

Outputs

incident/weekly operational reports + final assessment

Decision

go / no-go

How the Pilot Works

Step 1 — Kickoff

Scoping

We jointly define the scope of the pilot:

selection of up to 10 systems or assets
operating environment
telemetry sources available
evaluation goals and success criteria

This ensures the pilot reflects your real operational environment.

Steps

1. Kickoff: Scoping
2. Setup: Days 1–3
3. Run: Up to 30 days
4. Decision: Final report

Deliverables

During the pilot you receive:

  • Pilot kickoff checklist and deployment plan
  • Weekly operational reports
  • Summary of findings and investigations
  • Final pilot assessment report
  • Optional executive summary for leadership or procurement

What We Need From You

To run the pilot we require:

  • Selection of up to 10 assets for monitoring
  • Administrative access required for deployment
  • A technical point of contact for incident handling and weekly syncs
  • Approval for telemetry data retention during the pilot

See the Pilot Technical Brief below.

Ready to Run the Pilot?

Pilot Technical Brief

1. Integration and Implementation

These questions provide a detailed understanding of how the HoundBytes platform works and integrates with key components of the client's infrastructure.

A. Log Collection and SIEM Integration

1. Compatibility:

What SIEM platforms (e.g. Splunk, ELK, QRadar) do you support natively? What is the integration process if we use a custom or less common SIEM?

As Elastic MSSP partners, we natively support the Elastic (ELK) SIEM.

For custom or less common SIEM:

  • Integration via Syslog (RFC 5424), REST API, or Logstash pipeline
  • Normalization to the Elastic Common Schema (ECS)
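
To make the Syslog path concrete, here is a minimal client-side sketch, assuming a placeholder collector hostname and port supplied at kickoff (not real HoundBytes endpoints); it frames an RFC 5424 message with RFC 5425 octet counting and sends it over TLS.

```python
import socket
import ssl
from datetime import datetime, timezone

# Placeholder collector endpoint -- the real values are agreed at kickoff.
COLLECTOR_HOST = "syslog.collector.example"
COLLECTOR_PORT = 6514  # conventional port for syslog over TLS

def rfc5424_message(app_name: str, msg: str, hostname: str = "app-server-01") -> bytes:
    """Build a minimal RFC 5424 syslog frame with octet-counting (RFC 5425)."""
    # PRI 134 = facility local0 (16) * 8 + severity informational (6)
    ts = datetime.now(timezone.utc).isoformat()
    frame = f"<134>1 {ts} {hostname} {app_name} - - - {msg}"
    return f"{len(frame.encode('utf-8'))} {frame}".encode("utf-8")

def send_event(message: bytes) -> None:
    """Deliver one framed event to the collector over TLS."""
    context = ssl.create_default_context()
    with socket.create_connection((COLLECTOR_HOST, COLLECTOR_PORT)) as raw:
        with context.wrap_socket(raw, server_hostname=COLLECTOR_HOST) as tls:
            tls.sendall(message)

if __name__ == "__main__":
    send_event(rfc5424_message("billing-app", "user=jdoe action=login result=failure"))
```

Octet-counting framing is used because newline-delimited framing is unreliable over TLS streams; the collector side (Elastic Agent or Logstash listener) then parses and normalizes the event to ECS.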

2. Collection Methods:

What log forwarding methods are supported (Syslog, Agent, API, Filebeat)? What is the recommended method to ensure data integrity and security in transit?

All of the above are supported: Syslog, Agent, API, and Filebeat. For data integrity and security in transit, TLS-encrypted transport is recommended (Elastic Agent forwarding or TCP/TLS Syslog, as shown in the data-sources table below).

3. Data Sources:

How do you configure log collection from various sources?

  • On-Premise Infrastructure: Windows/Linux servers, Active Directory, and network equipment (firewalls, switches); collected via Elastic Agent (servers, AD) and Syslog (network equipment)
  • Cloud Infrastructure: AWS (CloudTrail, GuardDuty, VPC Flow Logs), Azure (Sentinel, Activity Logs), Google Cloud
  • SaaS applications: Office 365, Google Workspace, Salesforce

The table below summarizes the collection method, configuration, and key events for each source:
| Category | Source | Collection Method | How to configure (in brief) | Key logs / events |
|---|---|---|---|---|
| On-Prem | Windows Server / Endpoint | Elastic Agent | Install agent → enable Windows Event Logs + Defender → select Security/System/Application → TLS forward | Logon, privilege escalation, malware, process execution |
| On-Prem | Active Directory / DC | Elastic Agent | Install agent on DC → enable AD integration → enable audit policies | Kerberos abuse, NTLM downgrade, account takeover |
| On-Prem | Linux Servers | Elastic Agent | Install agent → enable System + Auditd → define log paths | Sudo abuse, persistence, suspicious processes |
| On-Prem | Network Devices (FW, Switch) | TCP/TLS Syslog | Configure device syslog target → Elastic Agent/Logstash listener → ECS parser | Allow/deny traffic, IDS/IPS, config changes |
| On-Prem | Custom Applications | Agent / Syslog / API | Define log format → ECS mapping → test parsing | Auth failures, errors, abuse patterns |
| Cloud | AWS | API (IAM Roles) | Create read-only IAM Role → enable CloudTrail, GuardDuty → Elastic AWS integration | IAM abuse, API misuse, suspicious network |
| Cloud | Azure | API (Entra App ID) | App Registration → read-only API permissions → enable Activity + Sign-in logs | Identity abuse, token misuse |
| Cloud | Google Cloud | API (Service Account) | Create read-only SA → enable Audit Logs → Elastic GCP integration | Privilege abuse, API anomalies |
| SaaS | Microsoft 365 | API (Unified Audit Log) | Entra App → AuditLog.Read.All → enable Unified Audit Log | Mailbox abuse, impossible travel |
| SaaS | Google Workspace | API | Service Account + domain-wide delegation → Admin SDK | Admin abuse, login anomalies |
| Containers | Kubernetes | Elastic Agent | Deploy agent on nodes → enable K8s integration → API access | Pod exec, container escape attempts |
| Containers | Docker | Elastic Agent | Install agent on host → enable Docker integration | Container lifecycle abuse |
| Network | NetFlow / sFlow | Flow Logs | Configure exporter → Elastic listener → flow parsing | Lateral movement, exfiltration |
| Security | Third-party AV/EDR | API / Syslog | Enable vendor integration → API key / syslog feed | Malware alerts, detections |

4. Parsing and Normalization:

Do you have pre-defined parsers for our critical applications? How complex is the process of creating a new parser for a custom application?

We use Elastic Common Schema (ECS) to normalize all events.

We have predefined parsers for most enterprise applications and common technologies.

For custom applications, creating a new parser is of low-to-medium complexity and takes 1–3 days on average, including testing and validation.

The parsers are maintained and updated by the HoundBytes team for the duration of the contract.
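
As an illustration of what such a parser does, here is a toy sketch assuming a hypothetical custom-application log format; the ECS field names used (@timestamp, event.category, event.outcome, user.name, source.ip) are standard ECS, while the input format and values are invented for the example.

```python
import re
from datetime import datetime, timezone

# Hypothetical custom-app log line (the real format is agreed at onboarding):
#   2025-01-15 10:32:01 LOGIN_FAIL user=jdoe src=10.0.0.5 reason=bad_password
LINE_RE = re.compile(r"(?P<ts>\S+ \S+) (?P<event>\w+) user=(?P<user>\S+) src=(?P<ip>\S+)")

def to_ecs(line: str) -> dict:
    """Map one raw application log line onto Elastic Common Schema fields."""
    m = LINE_RE.match(line)
    if not m:
        raise ValueError(f"unparseable line: {line!r}")
    ts = datetime.strptime(m["ts"], "%Y-%m-%d %H:%M:%S").replace(tzinfo=timezone.utc)
    return {
        "@timestamp": ts.isoformat(),
        "event": {
            "category": ["authentication"],
            "outcome": "failure" if "FAIL" in m["event"] else "success",
        },
        "user": {"name": m["user"]},
        "source": {"ip": m["ip"]},
        "message": line,  # original line kept for forensics
    }

print(to_ecs("2025-01-15 10:32:01 LOGIN_FAIL user=jdoe src=10.0.0.5 reason=bad_password"))
```

In the actual service the equivalent mapping is typically implemented in the ingestion pipeline (e.g. Logstash) rather than standalone code, but the idea is the same.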

5. Latency:

What is the average latency from the generation of an event on a source system to its visibility in your platform?

  • Elastic Agent / Syslog: 10–60 seconds
  • API integrations (Cloud, SaaS): 1–5 minutes

For critical incidents (P1), detection and alerting to the MDR team is achieved in less than 30 minutes thanks to the WorkHorse system.

B. Endpoint Agents (XDR)

1. OS Compatibility:

What operating systems are supported by the XDR agent (versions specific to Windows, macOS, Linux including distributions like CentOS, Ubuntu, RHEL)?

The XDR agent supports:

  • Windows Server and Endpoint (2016+)
  • Linux: Ubuntu, RHEL, Rocky, Alma, CentOS
  • macOS (Apple-supported versions)

2. Performance Impact:

What resources does the agent consume on average (CPU, RAM, I/O)? Have you performed performance tests on systems with high load (e.g. database servers, application servers)?

Average agent consumption:

  • CPU: below 2%
  • RAM: 150–300 MB
  • Disk I/O: minimal

The agent has been tested on servers with high load (databases, application servers).

3. Offline Capabilities:

What detection and response capabilities does the agent have if an endpoint is disconnected from the network? How long can it store events and alerts locally?

If an endpoint is offline:

  • Local detections remain active
  • Events are stored locally for 24–72 hours
  • Synchronization is done automatically upon reconnecting

4. Deployment and Management:

How is agent deployment done at scale (e.g. 500 endpoints)? Does it support GPO, SCCM, Ansible or other centralized management tools?

Agent deployment can be done centrally using:

  • GPO
  • SCCM
  • Ansible
  • Custom scripts / CI pipelines

Scalability has been validated for thousands of endpoints.

5. Software Conflicts:

Are there any known conflicts with other security tools (e.g. antivirus, data loss prevention solutions) that we might have installed?

The agent is compatible with most AV, EDR, and DLP solutions.

No major conflicts are known; any exceptions are documented on a case-by-case basis if they occur.

C. Network and Cloud Integration

1. Traffic Monitoring:

Do you perform network traffic analysis (NTA)? Does it require physical/virtual sensors or is it based on logs from firewalls and switches (e.g. NetFlow, sFlow)?

Network traffic analysis (NTA) is performed based on existing logs and flows (firewall logs, NetFlow, sFlow, VPC Flow Logs).

There is no need to install dedicated physical or virtual sensors.

2. Network Level Response:

What response actions can be orchestrated at the network level? (e.g. blocking an IP address in the firewall, isolating a VLAN)

Network-level response actions may include:

  • Blocking IP addresses in the firewall
  • Isolating compromised hosts
  • Deactivating compromised accounts

Actions can be automatic or semi-automatic, with client approval.

3. Cloud Visibility:

How do you gain visibility into containers and orchestrators (Docker, Kubernetes)? Does it require a specific agent, or does it integrate at the API level with the orchestration platform?

Visibility into Docker and Kubernetes is achieved through the Elastic Agent, installed on the nodes or deployed to the K8s cluster as a manifest, combined with integration with the orchestrator API.

Processes, network activity, and configuration changes are monitored.

Cloud posture capabilities such as CSPM, KSPM, and CNVM are also available.

2. Operational Processes and Performance

These questions clarify daily work, alert management, and service performance in real-world scenarios.

A. Alert Management and Incident Response

1. Triage and Correlation:

How does the alert correlation process work? Can you provide a concrete example of how 100 raw alerts from various sources are correlated into a single actionable incident?

The triage and correlation process is based on native capabilities in WorkHorse, using correlated detection rules and the Cases functionality.

Raw alerts generated from multiple sources (endpoint, AD, network, cloud) are automatically correlated on common entities such as user, host, IP, process, and time range. MDR analysts consolidate these alerts into a single Case, which represents the actionable incident.

Example:

In the case of an account compromise, approximately 100 raw alerts (failed logins, abnormal successful logons, suspicious processes, unusual traffic) are correlated and attached to a single Case of type Account Compromise with Lateral Movement. Redundant alerts are eliminated, and the incident is managed centrally from the Case, with a complete timeline, clear impact, and documented response actions.
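
As a simplified sketch of the grouping idea (not the actual WorkHorse implementation), the snippet below collapses raw alerts that share a common entity within a time window into one case; it keys only on the user entity for brevity, whereas the real correlation also uses host, IP, and process.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass(frozen=True)
class Alert:
    source: str       # "endpoint", "ad", "network", "cloud"
    user: str
    host: str
    rule: str
    timestamp: float  # epoch seconds

def correlate(alerts: list[Alert], window_s: int = 3600) -> list[dict]:
    """Collapse raw alerts sharing a user entity within a time window into cases."""
    by_user: dict[str, list[Alert]] = defaultdict(list)
    for a in sorted(alerts, key=lambda a: a.timestamp):
        by_user[a.user].append(a)

    cases = []
    for user, items in by_user.items():
        # Only open a case when several alerts cluster inside the window
        if len(items) > 1 and items[-1].timestamp - items[0].timestamp <= window_s:
            cases.append({
                "title": f"Possible account compromise: {user}",
                "hosts": sorted({a.host for a in items}),
                "distinct_rules": sorted({a.rule for a in items}),  # duplicates removed
                "raw_alert_count": len(items),  # e.g. ~100 raw alerts -> 1 case
            })
    return cases
```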

2. Noise Reduction:

What is the average false-positive rate? What mechanisms do you use to reduce noise and tune alerts specifically for our environment during the pilot?

Noise reduction is achieved by:

  • Multi-source correlation (endpoint, identity, network, cloud) — not isolated alerts
  • Adjusting thresholds and detection rules based on the actual behavior of the environment
  • Contextual whitelisting (user, host, process), not based on static rules
  • Automatic closure of duplicate or low-value alerts
  • Continuous feedback during the pilot for tuning specific to the customer's environment

During the pilot period, tuning is performed iteratively until an optimal balance between accuracy and coverage is achieved, without reducing security visibility.

3. Response Process:

Describe a P1 (High Severity) incident scenario step by step, from initial detection (<30 min) to containment (<2h). What actions are automated and where is human intervention required (both from your team and ours)?

Detection:

The incident is detected automatically through correlation rules or expert analysis, with initial alerting in under 30 minutes.

Validation:

Analysts confirm the incident, create a Case and consolidate relevant alerts. (human intervention)

Notification:

The customer is notified immediately via messaging apps/phone/email, with technical summary and impact. (human)

Containment:

In less than 2 hours, response actions such as isolating the host, blocking IOCs, or disabling compromised accounts are applied.

Actions can be automated where pre-approved; disruptive actions are first validated by the customer.

Stabilization:

The incident is documented in Cases and in reporting, with clear recommendations for remediation and prevention.

4. Communication During Incidents:

What communication channels do you use for critical incidents (phone, email, dedicated portal, Slack/Teams)? Who is our single point of contact?

Any channel you prefer: phone, email, dedicated portal, or Slack/Teams. The single point of contact (SPOC) is defined at the start of the engagement.

5. Host Isolation:

How does isolating a host technically work? What ports/protocols remain open to allow remote investigation by your team?

Host isolation is achieved through Elastic Agent's XDR capabilities, which automatically blocks all non-essential network traffic at the endpoint level.

During isolation, only the following are allowed:

  • Secure agent-to-MDR-platform communication (TLS)
  • Access required for forensic investigation and collection
  • Synchronization of logs and alerts

No lateral or outbound traffic to other internal systems is allowed, immediately reducing the risk of propagation, while remote investigation remains possible.

B. Reporting and Visibility

1. Dashboard and Portal:

What level of access will we have to the SIEM/MDR platform? Can we create custom dashboards or run our own threat hunting?

Access is controlled by roles (RBAC), so permissions can be tailored to teams or users; an identity provider (IdP) can also be used. Depending on the contract, you can have access to everything you request, for example:

  • Create custom dashboards
  • Run your own searches and threat hunting activities
  • Access timelines, alerts, and reports

2. Report Types:

What standard reports are included in the service (daily, weekly, monthly)? Can we request customized reports for audit or compliance (e.g. GDPR, ISO27001)?

The service includes standard reports:

  • Daily: summary of alerts and incidents, operational status
  • Weekly: trends, significant incidents, tuning performed
  • Monthly: KPIs, identified risks, improvement recommendations

Upon request, customized audit and compliance reports can be delivered, covering GDPR, ISO 27001, and other specific compliance requirements; reporting frequency is agreed together.

3. Performance Metrics (KPIs):

How are service performance indicators measured and reported, beyond basic SLAs (e.g. threat hunting success rate, mean remediation time by severity)?

Service performance is measured and reported through clear indicators, beyond basic SLAs, including:

  • MTTD (Mean Time to Detect) and MTTR (Mean Time to Respond), broken down by severity
  • False positive rate after tuning
  • Success rate of threat hunting (validated hunts vs. hunts initiated)
  • Average remediation time by incident type and severity
  • Number and type of incidents prevented through proactive actions

These KPIs are presented in periodic reports and in dedicated dashboards in the SIEM/MDR platform.
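
For illustration, here is a small sketch of how MTTD and MTTR per severity can be derived from case timestamps; the record layout and values are hypothetical.

```python
from statistics import mean

# Hypothetical case records exported from the MDR platform; timestamps
# are epoch seconds for brevity.
cases = [
    {"severity": "P1", "occurred": 0, "detected": 900,  "resolved": 5400},
    {"severity": "P1", "occurred": 0, "detected": 1200, "resolved": 6600},
    {"severity": "P3", "occurred": 0, "detected": 3000, "resolved": 90000},
]

def kpis_by_severity(cases):
    """Compute MTTD and MTTR (in minutes) per severity level."""
    out = {}
    for sev in {c["severity"] for c in cases}:
        subset = [c for c in cases if c["severity"] == sev]
        out[sev] = {
            "mttd_min": mean(c["detected"] - c["occurred"] for c in subset) / 60,
            "mttr_min": mean(c["resolved"] - c["detected"] for c in subset) / 60,
        }
    return out

print(kpis_by_severity(cases))
# e.g. {'P1': {'mttd_min': 17.5, 'mttr_min': 82.5}, 'P3': {...}}
```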

3. Support and Organization

These questions target the collaboration model and expertise of the assigned team.

1. Dedicated Team:

What does "dedicated team" mean? How many analysts (Tier 1, 2, 3), and engineers will be assigned to our account? Will we always interact with the same people?

By dedicated team we mean a stable structure assigned to your account, consisting of:

  • Tier 1, Tier 2 and Tier 3 analysts
  • Security engineers for technical support and tuning

The client will consistently interact with the same core team, ensuring operational continuity, familiarity with the environment, and efficient incident management.

How many exactly? We need to understand the scale and volume of your infrastructure before committing to a number.

2. Expertise:

What relevant certifications and experience does the team that will manage our account have (e.g. OSCP, GCIH, experience in our industry)?

The assigned team holds certifications and experience including:

  • CSIRT
  • OSCP
  • CRTO
  • CRTP
  • Elastic Security
  • PCNSE

3. Escalation Process:

What is the formal escalation process if we are not satisfied with the handling of an incident or the quality of service?

Level 1: Escalation to HoundBytes SOC Manager for immediate analysis and operational remediation.

Level 2: Escalation to the HoundBytes CISO for strategic review, impact assessment, and corrective action.

Level 3 (if necessary): Joint review between client CISO and HoundBytes CISO, with agreed action plan.

4. Knowledge Transfer:

How do you ensure knowledge transfer to our team? Do you offer training sessions or post-incident debriefings?

Knowledge transfer is carried out continuously throughout the collaboration, not just on a one-off basis. It includes:

  • Post-incident debriefings — explaining the cause, impact and lessons learned
  • Periodic knowledge sharing sessions with the client team (SOC, security engineers)
  • Clear documentation in Cases (timeline, decisions, actions)
  • Operational recommendations applicable to the customer's environment

Dedicated training sessions can be organized, adapted to the technical level of the team.

Security and Compliance

1. Data Privacy:

How do you ensure the security and confidentiality of our data and logs that are stored and processed in your platform? Where is the data physically stored (geographic region)?

Data security and confidentiality are ensured through enterprise standard technical and operational controls:

  • Data encryption in transit and at rest
  • Role-based access control (RBAC)
  • Logical separation between clients (tenant isolation)
  • Logging and auditing of analyst access to data

Data and logs are stored and processed in the European Union, in accordance with GDPR requirements and good security practices.

2. Cyber Liability Insurance:

Can you provide details about the $5M insurance policy? What exactly does it cover and what is the activation procedure in the event of a major incident?

Cyber & Technology Liability Insurance policy with a limit of EUR 5,000,000, underwritten by Lloyd's Insurance Company SA, through CFC Europe SA.

Key features:

  • Limit: EUR 5,000,000 in aggregate per year and EUR 5,000,000 per incident
  • Coverage: Worldwide
  • Period: 12 months
  • Policy type: Technology v4.0

Main areas covered:

  • Professional liability (services delivered, breach of contract, IP infringement)
  • Network Security & Privacy Liability
  • Incident Response (forensics, IR, legal, regulatory, crisis communication)
  • Cyber extortion (ransomware)
  • System damage & business interruption
  • Reputational harm, brand protection, loss mitigation

Explicit exclusions:

  • Cyber Crime — The policy covers liability, response costs, investigation, operational and legal impact, but does not replace a dedicated crime insurance policy.
  • General Liability — Coverage is strictly related to technological and cyber risks, not to the general business of the company.

Activation procedure in case of major incident (P1):

  • The CFC team is notified immediately via the 24/7 response line or the dedicated claims email
  • An official claim is opened and managed by CFC Claims Limited
  • Incident Response costs can be activated immediately (IR, forensics, legal)
  • Privacy breach notifications are made only with the insurer's prior written consent, according to the terms of the policy

This policy provides an additional level of financial and operational protection for customers in high-impact cyber incident scenarios.

3. Vulnerability Management:

Does your service include vulnerability scanning? How are vulnerabilities identified in our infrastructure prioritized and communicated?

The service may include vulnerability scanning as part of the MDR package or as a complementary service, depending on the client's requirements.

The identified vulnerabilities are:

  • Prioritized based on risk, taking into account severity (CVSS), exploitability and real exposure
  • Correlated with active alerts from the SIEM to highlight risks with immediate impact
  • Clearly communicated through periodic reports and dedicated notifications for critical vulnerabilities

Remediation recommendations are actionable and tailored to the customer's environment.
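
Purely as an illustration of the prioritization inputs described above, here is a toy scoring sketch; the weights and field names are hypothetical, and the actual model is tuned per environment.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    cve: str
    cvss: float          # CVSS base score, 0-10
    exploited: bool      # known exploitation in the wild
    internet_facing: bool
    active_alerts: int   # correlated SIEM alerts on the same asset

def risk_score(f: Finding) -> float:
    """Illustrative prioritization: CVSS weighted by exploitability and exposure.

    The multipliers are hypothetical placeholders, not the production model.
    """
    score = f.cvss
    if f.exploited:
        score *= 1.5   # boost for known exploitation
    if f.internet_facing:
        score *= 1.3   # boost for real exposure
    score += min(f.active_alerts, 5)  # cap the alert contribution
    return round(score, 1)

findings = [
    Finding("CVE-2024-0001", 9.8, True, True, 2),
    Finding("CVE-2024-0002", 7.5, False, False, 0),
]
for f in sorted(findings, key=risk_score, reverse=True):
    print(f.cve, risk_score(f))
```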

4. Exit from the Contract (Exit Plan):

In the event of contract termination, what is the procedure for exporting all data, logs, reports and customized configurations? How long is the data retained after contract termination?

In case of contract termination, HoundBytes ensures the complete export of the client's data, including raw and normalized logs, alerts, incidents (Cases), reports and custom configurations. The export is performed in standard formats (JSON, CSV, PDF), via API or secure transfer, to the infrastructure indicated by the client.

The data is retained for a limited period after the termination of the contract, exclusively for the completion of the export and any operational clarifications or legal requirements. After this period, the data is permanently deleted from HoundBytes systems, including backups, in accordance with GDPR requirements.
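
As a sketch of what the API-based export can look like on an Elastic-based stack, the snippet below uses the official Elasticsearch Python client's scan helper to stream documents to newline-delimited JSON; the endpoint, index pattern, and API key are placeholders agreed as part of the exit plan.

```python
import json
from elasticsearch import Elasticsearch, helpers

# Placeholder endpoint, index pattern, and API key -- real values come
# from the exit plan.
es = Elasticsearch(
    "https://export.houndbytes.example:9200",
    api_key="<client-api-key>",
)

def export_index(index_pattern: str, out_path: str) -> int:
    """Stream every document matching the pattern to newline-delimited JSON."""
    count = 0
    with open(out_path, "w", encoding="utf-8") as out:
        # helpers.scan pages through all results using the scroll API
        for doc in helpers.scan(es, index=index_pattern,
                                query={"query": {"match_all": {}}}):
            out.write(json.dumps(doc["_source"]) + "\n")
            count += 1
    return count

if __name__ == "__main__":
    n = export_index("logs-*", "export-logs.ndjson")
    print(f"exported {n} documents")
```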