Support Engineering

Methodology & Practice

5+ years of Technical Support and Technical Account Management across enterprise software, cloud platforms, and developer tooling. This page documents the process, not just the outcomes.

Role types

TSE · TAM · Incident Command

Experience

5+ years

Environments

Windows · macOS · Linux

Observability

Datadog · PagerDuty · Zendesk

Process

Issue Triage Framework

Every issue enters the same decision tree regardless of severity. The process exists to prevent cognitive shortcuts: the temptation to jump to a known solution before confirming the actual problem.

1. Issue Reported → Collect Symptoms (env · steps · errors · timeline)
2. Reproducible? If no, gather more info (logs, repro env) and retry; if yes, continue.
3. Isolate Layer (UI → API → DB → Cloud)
4. Layer Isolated? If no, escalate with a full packet; if yes, continue.
5. Known Pattern? If yes, apply the known fix (runbook / precedent); if no, investigate.
6. Resolve + Document
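The decision tree above can be sketched as a small dispatch function. This is a minimal illustration of the flow, not real tooling; the flag names are hypothetical stand-ins for the actual checks:

```python
def triage(issue: dict) -> str:
    """Walk one issue through the triage decision tree.

    `issue` carries boolean flags standing in for the real work
    (reproduction attempts, layer isolation, runbook lookup).
    Returns the next action as a string.
    """
    if not issue.get("reproducible"):
        # Loop back: collect logs and a repro environment, then retry.
        return "gather more info (logs, repro env) and retry"
    if not issue.get("layer_isolated"):
        # The failure could not be pinned to UI / API / DB / Cloud.
        return "escalate with full packet"
    if issue.get("known_pattern"):
        return "apply known fix (runbook / precedent)"
    return "investigate, then resolve and document"
```

The point of encoding it this way is that every branch ends in a concrete next action; there is no state where an issue simply stalls.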

Symptom Collection

Never start with solutions. Collect exact error messages, reproduction steps, environment details, and a timeline of when behaviour changed. Half the time the root cause is in the context, not the symptom.

Controlled Reproduction

Reproduce in isolation before touching production. Build the minimum environment that shows the issue; this eliminates environmental variables and prevents compounding the problem during investigation.

Layer Isolation

Work from the outside in. Confirm the request reaches the server before checking server logic. Confirm data reaches the DB before checking queries. Each confirmed layer narrows the search space by half.
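The outside-in sweep can be expressed as an ordered list of health checks, stopping at the first layer that fails. A minimal sketch; the check functions here are placeholders for real probes (a UI smoke test, an API health endpoint, a `SELECT 1` against the DB):

```python
def isolate_layer(checks: list) -> str:
    """Run each layer's health check in order (UI -> API -> DB -> Cloud)
    and report the first layer that fails.

    `checks` is an ordered list of (layer_name, check_fn) pairs; each
    check_fn returns True if that layer is confirmed healthy.
    """
    for layer, healthy in checks:
        if not healthy():
            # First unhealthy layer: the search space just collapsed.
            return f"failure isolated at: {layer}"
    return "all layers confirmed; escalate with full packet"

# Hypothetical usage -- lambdas stand in for real probes:
result = isolate_layer([
    ("ui",  lambda: True),   # page renders
    ("api", lambda: False),  # health endpoint times out
    ("db",  lambda: True),
])
```

Ordering matters: each confirmed layer is evidence you never have to re-gather, which is why the sweep always runs in the same direction.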

Escalation Clarity

Escalate with a complete packet: reproduction steps, layers already eliminated, current hypothesis, and a specific ask. Never escalate a vague problem. In my experience, a well-formed escalation gets resolved several times faster than a vague one.
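The packet has a fixed shape, which makes it easy to check for completeness before sending. A sketch only; the field names are illustrative, not from any real ticketing schema:

```python
from dataclasses import dataclass

@dataclass
class EscalationPacket:
    """The minimum a receiving engineer needs to act without a
    clarifying round trip. Field names are hypothetical."""
    reproduction_steps: list
    layers_eliminated: list
    current_hypothesis: str
    specific_ask: str

    def is_complete(self) -> bool:
        # Every field must be non-empty before the escalation goes out.
        return all([
            self.reproduction_steps,
            self.layers_eliminated,
            self.current_hypothesis.strip(),
            self.specific_ask.strip(),
        ])
```

The gate is deliberately binary: an escalation with any empty field is, by definition, a vague problem.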

Root Cause Documentation

The resolution is not the end of the work. Every non-trivial issue gets a root cause write-up. The goal is to ensure no engineer has to rediscover the same path through the same problem.

Customer Communication

Customers want honesty and progress updates, not polished non-answers. Communicate what you know, what you don't know, and what the next step is. Silence is the fastest path to escalation.

Case Studies

War Stories

ActionIQ — CDP Data Export Failures (Transient or Systemic?)

Customer Support Engineer · ActionIQ (CDP) · Severity: High · Production

Resolved

Identified memory-induced ingest timeouts as the root cause of transient-looking export failures. Worked with Engineering to scale worker allocation. Resolved the failures permanently without disrupting the customer.

Symptom

A premium support customer was seeing a higher volume of data export failures than expected. The initial error indicated that an expected output file was blank. Critically, these jobs succeeded on retry, which initially pointed to a transient infrastructure issue rather than a systemic defect.

Triage approach

Retries masking failures is a classic signal that something upstream is flaky under load, not randomly failing. I pulled the job timeline in Datadog for the failing export runs and traced backward through the downstream dependency chain, specifically looking at what the export job depended on before it could run.

Root cause

The daily ingest tasks that fed the export pipeline were timing out due to memory pressure. The exports themselves were not the problem; they were waiting on data that was never fully materialized. Once the ingest completed on retry, the export succeeded. The memory issue was invisible at the export layer.

Resolution

Escalated to Engineering with a clear packet: exact job IDs, Datadog trace screenshots, and the ingest timeout timestamps aligned to the export failure timestamps. The specific ask: increase the worker allocation for this customer's ingest jobs. Engineering increased the allotted workers, the ingest tasks completed within their time window, and the export failures stopped.

Customer communication

Maintained proactive communication with the customer throughout: explained what we knew (export jobs were failing), what we were investigating (upstream dependencies), and the timeline. Avoided vague reassurances. Sent a summary with root cause and resolution after Engineering deployed the fix.

Impartner TAM — Splunk Partner Portal Rebrand (6 Custom ASPX Pages)

Technical Account Manager · Impartner · Splunk Partner Portal · Duration: 3 months

Resolved

Delivered 6 new ASP.NET (ASPX) web pages and a new badge navigation system on schedule, enabling Splunk's partner recognition program across product lines, tested against acceptance criteria I wrote from stakeholder requirements.

The project

Splunk was rebranding their partner portal and launching a new sales badge program with recognition for partners who achieved product certification across different Splunk product lines. They needed 6 new web pages, a new navigation system for the badges, and updated components on existing sales partner profiles and lead/opportunity dashboards.

Stakeholder coordination

The portal rebrand touched RevOps, Marketing, UI/UX, and the Salesforce Admin team, all with different priorities and different definitions of done. I led requirements gathering across all four teams, translated business requirements into technical acceptance criteria, and served as the single point of contact for questions about portal behavior.

Technical implementation

The pages were built in ASP.NET using ASPX, C#, HTML, CSS, and JavaScript. The backend pulled Salesforce data (partner profile fields, opportunity data, and badge eligibility) via optimized SQL queries against the Salesforce-backed data source. Getting query performance right was non-trivial: I applied execution plan analysis and window function tuning to keep page load times acceptable for the partner-facing pages.
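The window-function pattern referenced above can be illustrated with a toy version of the badge query. This is a sketch only: the real queries ran against a Salesforce-backed data source, and the table and column names here are hypothetical. SQLite stands in for the actual database purely to make the example runnable:

```python
import sqlite3

# Hypothetical certifications table; one row per badge earned.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE certifications (
        partner_id INTEGER, product_line TEXT,
        level TEXT, earned_at TEXT
    );
    INSERT INTO certifications VALUES
        (1, 'Core',     'Silver', '2023-01-10'),
        (1, 'Core',     'Gold',   '2023-06-02'),
        (1, 'Security', 'Silver', '2023-03-15');
""")

# ROW_NUMBER over a (partner, product line) partition picks each
# partner's most recent badge per product line in a single pass,
# instead of a correlated subquery that re-scans the table per row.
rows = conn.execute("""
    SELECT partner_id, product_line, level FROM (
        SELECT *,
               ROW_NUMBER() OVER (
                   PARTITION BY partner_id, product_line
                   ORDER BY earned_at DESC
               ) AS rn
        FROM certifications
    )
    WHERE rn = 1
    ORDER BY partner_id, product_line
""").fetchall()
# rows -> [(1, 'Core', 'Gold'), (1, 'Security', 'Silver')]
```

The single-pass shape is what execution-plan analysis tends to push you toward on page-load-critical queries: one scan plus a sort, rather than one subquery per result row.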

Testing and delivery

I wrote the test cases myself based on the requirements I had gathered, executed them, iterated through review cycles with the UI/UX and Marketing teams, and managed the deployment through Perforce. The badge navigation system launched on schedule and enabled partner recognition across Splunk's product line portfolio.

Documentation

What I Write

Documentation is the multiplier on resolved issues. A well-written runbook means the next engineer encounters the same problem for the last time.

Runbook · For: Support & on-call engineers

CDP Data Pipeline — Ingest Timeout Escalation Guide

  1. Symptom patterns
  2. Upstream dependency trace (Datadog)
  3. Timeout threshold reference
  4. Worker allocation escalation request template
  5. Customer communication templates

Available on request

Technical Spec · For: Engineering & TAM handoff

ASP.NET/ASPX Partner Portal — Badge Navigation Implementation

  1. Stakeholder requirements mapping
  2. ASPX page architecture
  3. Salesforce data model & query design
  4. Acceptance criteria checklist
  5. Deployment via Perforce + IIS

Available on request

Integration Guide · For: Developers integrating Claude agents

Claude API Tool-Use — Structured Data Ingestion Patterns

  1. Tool schema design principles
  2. Stateless agent architecture
  3. Validation + fallback strategy
  4. Type-safe tool call validation
  5. Cost optimisation checklist

Available on request
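The validation + fallback strategy from the integration guide can be sketched without calling any API. The tool definition below follows the Anthropic tool-use JSON shape (`name`, `description`, `input_schema`), but the tool name and fields are hypothetical examples, not from the actual guide:

```python
# Hypothetical tool definition for structured data ingestion.
RECORD_TOOL = {
    "name": "ingest_record",
    "description": "Store one structured support record.",
    "input_schema": {
        "type": "object",
        "properties": {
            "customer_id": {"type": "string"},
            "severity": {"type": "string",
                         "enum": ["low", "high", "critical"]},
        },
        "required": ["customer_id", "severity"],
    },
}

def validate_tool_input(schema: dict, payload: dict) -> list:
    """Return a list of problems; an empty list means the tool call
    is usable. The fallback step in a real integration would feed
    these problems back to the model for a corrected call, rather
    than failing the ingestion outright."""
    spec = schema["input_schema"]
    problems = [f"missing required field: {f}"
                for f in spec["required"] if f not in payload]
    for field, rules in spec["properties"].items():
        if field in payload and "enum" in rules \
                and payload[field] not in rules["enum"]:
            problems.append(f"invalid value for {field}: {payload[field]!r}")
    return problems
```

Validating before acting is what keeps the agent stateless: a bad tool call produces an error list to re-prompt with, never a partially ingested record.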

Interested in working together?

View my full engineering work or reach out directly.

View Projects · LinkedIn ↗