Resources

The bug reporting & defect management glossary.

Short, vendor-neutral definitions of every term you'll see in modern QA workflows - from MTTR and severity levels to AI triage, PII redaction, and session replay.

Bug Reporting

5 terms

Bug reporting

a.k.a. Issue reporting · Defect reporting

#bug-reporting

The process of capturing and communicating a software defect so an engineer can reproduce and fix it.

Bug reporting is the practice of capturing what a user experienced - what they saw, what they did, and what the system did wrong - and turning it into a structured ticket an engineer can act on. A good bug report includes the steps to reproduce, the expected vs actual behaviour, the environment (browser, OS, app version), and supporting artefacts like a screenshot, console log, and network trace. Modern tools like Oneclik automate the artefact capture so users only have to click once.

PII redaction

a.k.a. Data masking · Field masking

#pii-redaction

Automatically masking personally identifiable information (names, emails, IBANs, auth tokens) from bug-report artefacts before they leave the browser.

PII redaction is the practice of stripping personally identifiable information - names, email addresses, IBANs, auth tokens, full credit-card numbers - from screenshots, DOM snapshots, console output, and network captures before they are uploaded. For regulated industries (fintech, insurance, healthtech) it is the difference between a usable bug report and a compliance incident. Oneclik redacts by CSS selector, regex, and heuristic, and offers EU data residency for additional containment.
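The regex layer of such a pipeline can be sketched as below - the pattern names and replacement format are illustrative, and a production redactor would combine this with selector-based and heuristic masking:

```javascript
// Minimal sketch of regex-based PII redaction over captured text
// (console output, network bodies) before upload.
// Patterns are illustrative, not exhaustive.
const PII_PATTERNS = [
  { name: "email", re: /[\w.+-]+@[\w-]+\.[\w.]+/g },
  { name: "iban", re: /\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b/g },
  { name: "bearer", re: /Bearer\s+[\w.~+\/-]+=*/g },
];

function redact(text) {
  // Apply each pattern in turn, replacing matches with a labelled token.
  return PII_PATTERNS.reduce(
    (out, { name, re }) => out.replace(re, `[REDACTED:${name}]`),
    text
  );
}
```

Running the redaction client-side, before artefacts leave the browser, is what keeps the raw PII out of the vendor's hands entirely.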

Reproduction steps

a.k.a. Repro steps · Steps to reproduce

#reproduction-steps

The numbered sequence of actions an engineer should take to reproduce a bug on their own machine.

Reproduction steps are the ordered actions someone must take to make a bug appear: navigate here, click that, type this, observe that. Without them, an engineer's first hour is spent guessing - and most 'cannot reproduce' tickets die at this step. AI tools can draft reproduction steps from a session replay, but a human user's one-line description is still the ground truth.

Bug report template

#bug-report-template

A reusable structure for bug reports - typically title, environment, steps to reproduce, expected vs actual, and attachments.

A bug report template is the structured form your team uses for every defect: a clear title, the environment (browser, OS, app version, user role), numbered reproduction steps, expected vs actual behaviour, and attachments (screenshot, console, network). Templates exist because un-templated bug reports are 5× more likely to be returned to the reporter for clarification. Oneclik fills the template automatically from captured context, so the user only writes the title and observation.
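A minimal sketch of rendering such a template from captured context - the field names here are illustrative, not a fixed schema:

```javascript
// Sketch: rendering a structured bug report from a template object.
function renderBugReport({ title, environment, steps, expected, actual }) {
  return [
    `Title: ${title}`,
    `Environment: ${environment}`,
    "Steps to reproduce:",
    ...steps.map((s, i) => `  ${i + 1}. ${s}`), // numbered repro steps
    `Expected: ${expected}`,
    `Actual: ${actual}`,
  ].join("\n");
}
```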

User feedback widget

a.k.a. In-app feedback widget · Bug report widget

#user-feedback-widget

A small UI element embedded inside a web app that lets users report bugs or send feedback without leaving the page.

A user feedback widget is a JavaScript-embedded UI component - usually a floating button - that lets a real user report a bug or send feedback inside the product, without context-switching to email or a support portal. The best widgets capture artefacts (screenshot, console, network) silently when the user clicks, so the support engineer never has to ask follow-up questions. Oneclik is, at its core, a widget plus an AI-triage pipeline plus integrations to Jira and Linear.

Defect Management

3 terms

Defect management

#defect-management

The end-to-end process of tracking software defects from discovery through triage, prioritisation, fix, verification, and closure.

Defect management is the discipline of running the full lifecycle of a software bug: capture, triage, severity classification, ownership assignment, prioritisation against the backlog, fix, verification, and closure. Where bug reporting is the act of logging a single issue, defect management is the system that makes sure issues do not get lost, duplicated, or misprioritised. Oneclik covers the capture and AI-triage stages and feeds clean tickets into the defect-management system of record (Jira, Linear, GitHub Issues).
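The lifecycle described above can be sketched as an explicit state machine - the stage names and allowed transitions here are illustrative, not a prescribed workflow:

```javascript
// Sketch: a defect lifecycle as a state machine, so issues cannot
// silently skip triage or verification.
const TRANSITIONS = {
  captured: ["triaged"],
  triaged: ["prioritised", "closed"],     // closed = duplicate or invalid
  prioritised: ["in_fix"],
  in_fix: ["in_verification"],
  in_verification: ["closed", "in_fix"],  // verification can bounce a fix back
  closed: [],
};

function canTransition(from, to) {
  return (TRANSITIONS[from] ?? []).includes(to);
}
```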

Bug triage

a.k.a. Issue triage · Defect triage

#triage

The decision-making step where new bug reports are classified by severity, owner, and priority before going into the backlog.

Triage is the decision-making step between a bug being reported and an engineer being asked to fix it. The triager decides whether the report is reproducible, what severity it carries, which team owns it, whether it duplicates an existing issue, and where it sits on the priority list. Manual triage is one of the most expensive activities in QA - Oneclik uses AI to draft a recommended severity, owner, and duplicate-link before a human approves the ticket.

Severity levels (S1–S4)

a.k.a. Bug severity · Priority levels

#severity-levels

A 4-tier classification of defect impact, where S1 is a production outage and S4 is a cosmetic issue.

Severity levels (commonly S1–S4 or P0–P3) classify how badly a defect affects users. S1/P0 means a production outage or data loss, S2/P1 a major feature broken for many users, S3/P2 an issue with a workaround, and S4/P3 a cosmetic or minor problem. Strictly speaking, severity measures user impact while priority measures fix urgency, but many teams collapse the two into a single scale. Oneclik's AI triage suggests a severity based on user impact signals (auth flow vs admin tool, % of sessions affected) which a human approves.
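A sketch of severity suggestion from impact signals - the signal names and thresholds below are invented for illustration, not Oneclik's actual model:

```javascript
// Sketch: mapping impact signals to a draft S1–S4 severity.
// Thresholds and signal names are illustrative only.
function suggestSeverity({ outage, dataLoss, sessionsAffectedPct, workaroundExists }) {
  if (outage || dataLoss) return "S1";                              // outage or data loss
  if (sessionsAffectedPct >= 20 && !workaroundExists) return "S2";  // major feature broken
  if (workaroundExists) return "S3";                                // workaround exists
  return "S4";                                                      // cosmetic / minor
}
```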

AI in QA

2 terms

AI bug triage

a.k.a. LLM bug triage · AI-assisted triage

#ai-triage

Using a large language model to read raw bug-report context and propose a draft title, severity, owner, and duplicate-link for human approval.

AI bug triage applies a large language model to the noisy raw context of a bug report - screenshot, console errors, network trace, user message - and produces a structured draft: a clean title, a reproduction summary, a suggested severity, a likely owning team, and a duplicate-link to similar past tickets. The model is not a replacement for a human triager; it shifts the triager's job from writing tickets to reviewing them, which is typically 5–10× faster. Oneclik runs AI triage on every report before it lands in Jira or Linear.
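The context-assembly step can be sketched as a prompt builder - the field names and wording are illustrative, and the actual model call (which would return the structured draft) is omitted:

```javascript
// Sketch: packing raw bug-report context into a single triage prompt
// for an LLM. Field names are illustrative.
function buildTriagePrompt({ userMessage, consoleErrors, failedRequests }) {
  return [
    "You are a bug triage assistant. From the context below, draft:",
    "a title, a reproduction summary, a severity (S1-S4),",
    "a likely owning team, and any probable duplicate tickets.",
    `User message: ${userMessage}`,
    `Console errors: ${consoleErrors.join(" | ")}`,
    `Failed requests: ${failedRequests.join(" | ")}`,
  ].join("\n");
}
```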

Duplicate bug detection

a.k.a. Issue clustering · Bug deduplication

#duplicate-detection

Identifying that a new bug report describes the same defect as an existing ticket, usually using vector embeddings of report content.

Duplicate detection identifies when a new report describes a defect already tracked in the backlog. Modern tools embed every report (title, console error fingerprint, stack trace) into a vector and compare against the last 90 days of tickets; high-similarity matches are surfaced to the triager rather than auto-merged. The win is not a smaller backlog - it is engineers seeing 'this is the third report' as a real impact signal.
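The embedding comparison itself can be sketched with plain cosine similarity - in practice the vectors would come from an embedding model and be compared against recent tickets only:

```javascript
// Sketch: duplicate detection by cosine similarity over report embeddings.
// Embeddings are plain number arrays here; ticket shape is illustrative.
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

function likelyDuplicates(newEmbedding, tickets, threshold = 0.9) {
  // Surface high-similarity matches for the triager; never auto-merge.
  return tickets
    .map((t) => ({ id: t.id, score: cosine(newEmbedding, t.embedding) }))
    .filter((m) => m.score >= threshold)
    .sort((x, y) => y.score - x.score);
}
```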

QA & Testing

5 terms

Session replay

a.k.a. Session recording · User session replay

#session-replay

A short reconstructed video of the DOM, mouse, and input events leading up to a bug, played back from captured rrweb-style data.

Session replay is a reconstructed playback of what happened in the user's browser - DOM mutations, clicks, scrolls, form input, and network activity - assembled from a lightweight recording library (rrweb is the de-facto standard). Unlike a video, a session replay is a deterministic re-render, so engineers can inspect the exact DOM state and console at every frame. Oneclik attaches the last 30 seconds of session replay to every bug report by default, so reproduction is no longer guesswork.
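Keeping only a trailing window of recorded events can be sketched as a simple filter - the event shape is illustrative, and the 30-second window mirrors the description above:

```javascript
// Sketch: trimming an rrweb-style event buffer to the last 30 seconds
// before attaching it to a report. Event shape is illustrative.
const WINDOW_MS = 30_000;

function trimToWindow(events, nowMs) {
  // events: [{ type, timestamp }] in ascending timestamp order
  return events.filter((e) => nowMs - e.timestamp <= WINDOW_MS);
}
```

A real recorder also has to keep the most recent full DOM snapshot so the trimmed event stream can still be re-rendered from a valid starting state.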

Console log capture

#console-log-capture

Recording the browser's console output (errors, warnings, logs) and attaching it to a bug report.

Console log capture intercepts calls to the browser's console API (log, warn, error) and the global error and unhandledrejection events, then attaches the buffered output to a bug report. It is the highest-signal artefact in a frontend bug report - a stack trace usually points straight to the failing component. Oneclik captures console output silently in the background and ships it with every report.
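A minimal sketch of the interception idea - wrapping the console methods and buffering each call; a browser build would also listen for the error and unhandledrejection events:

```javascript
// Sketch: buffering console output by wrapping console.log/warn/error.
const buffer = [];

for (const level of ["log", "warn", "error"]) {
  const original = console[level].bind(console);
  console[level] = (...args) => {
    buffer.push({ level, args, at: Date.now() }); // record the call
    original(...args);                            // still print normally
  };
}
```

The buffer is what gets attached to the report; because the original method is still called, the developer console behaves exactly as before.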

Network capture (HAR)

a.k.a. HAR file · Network log

#network-capture

A structured log of every HTTP request the browser made (URL, method, status, timing), usually exported as a HAR file.

Network capture records every HTTP request the browser made - URL, method, status code, request and response headers, timing, and (optionally) bodies - in the HAR (HTTP Archive) format, a JSON format specified in a W3C draft and exportable from every major browser's DevTools. It is essential for diagnosing failed API calls, slow endpoints, and CORS or auth issues. Oneclik captures a sanitised network log around the moment of a bug report, with sensitive headers redacted by default.
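A simplified sketch of the shape of one HAR entry - the real HAR spec requires many more fields (a `log` wrapper, `creator`, `cache`, a full `timings` breakdown) than are shown here:

```javascript
// Sketch: mapping a captured request record to a (simplified) HAR entry.
// Input field names are illustrative.
function toHarEntry(req) {
  return {
    startedDateTime: req.startedAt, // ISO 8601 string
    time: req.durationMs,           // total elapsed ms
    request: {
      method: req.method,
      url: req.url,
      headers: req.requestHeaders,
    },
    response: {
      status: req.status,
      headers: req.responseHeaders,
    },
  };
}
```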

Regression bug

#regression

A defect that re-introduces a previously fixed problem, usually caused by a recent code change.

A regression is a bug in functionality that previously worked, typically introduced by a recent code change, dependency upgrade, or refactor. Regressions are especially expensive because they erode user trust - 'this used to work' is the hardest defect to defend. AI triage tools flag likely regressions by comparing a new error fingerprint against the timeline of recent deployments.

Shift-left testing

#shift-left-testing

Moving testing and bug detection earlier in the software development lifecycle, ideally to the developer's local machine.

Shift-left testing is the practice of catching defects as early as possible - in the IDE, in CI, in preview deployments - rather than after release. The economic argument is well-established: a bug caught in development is roughly 10× cheaper than the same bug caught in production. In-product reporting tools like Oneclik complement shift-left by making the unavoidable production bugs cheap to triage.

Metrics

2 terms

MTTR (Mean Time To Resolve)

a.k.a. Mean Time To Repair · Mean Time To Recovery

#mttr

The average elapsed time from when a defect is reported to when its fix is shipped to production.

MTTR - Mean Time To Resolve - is the average elapsed time between a defect being opened and its fix being deployed to production. It is closely related to the DORA 'time to restore service' metric and is the single best proxy for the health of an engineering team's defect-management workflow. Teams that automate context capture and AI triage (the front of the funnel) typically cut MTTR by 40–60% because engineers stop spending half their time asking 'can you reproduce this?'.
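Computing the metric itself is simple arithmetic - a sketch over opened/resolved timestamps, with illustrative field names:

```javascript
// Sketch: MTTR in hours from a list of resolved tickets.
function mttrHours(tickets) {
  const totalMs = tickets.reduce(
    (sum, t) => sum + (Date.parse(t.resolvedAt) - Date.parse(t.openedAt)),
    0
  );
  return totalMs / tickets.length / 3_600_000; // 3,600,000 ms per hour
}
```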

MTTD (Mean Time To Detect)

#mttd

The average elapsed time between a defect being introduced and it being noticed by a user, monitor, or QA process.

MTTD - Mean Time To Detect - measures how long a defect lives in production (or in a build) before anyone notices it. Lower MTTD means faster feedback loops and smaller blast radius per incident. One-click in-product reporting tools like Oneclik shorten MTTD by removing the friction that stops users from reporting issues.

Bugs, ideas & feedback - one click away.

Install Oneclik in two minutes. Free forever for small teams.