Configure and manage data quality tests, profiling, alerts, and incidents in OpenMetadata. Use when setting up quality tests, configuring profiler workflows, creating observability alerts, or triaging data quality incidents.
## Incident Lifecycle

Test Failure → New Incident Created → Acknowledged (ack) → Assigned to Owner → Investigation → Root Cause Documented → Resolved (with reason)
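The lifecycle above can be sketched as a small state machine. This is an illustrative model of the documented flow, not OpenMetadata's internal implementation:

```python
# Illustrative incident state machine; OpenMetadata's actual state
# handling may differ.
ALLOWED_TRANSITIONS = {
    "New": {"Ack"},
    "Ack": {"Assigned"},
    "Assigned": {"Resolved"},
    "Resolved": set(),  # terminal state
}

def transition(current: str, target: str) -> str:
    """Move an incident to `target`, rejecting illegal jumps."""
    if target not in ALLOWED_TRANSITIONS.get(current, set()):
        raise ValueError(f"Cannot move incident from {current} to {target}")
    return target

state = "New"
state = transition(state, "Ack")       # acknowledged
state = transition(state, "Assigned")  # routed to an owner
state = transition(state, "Resolved")  # closed with a reason
```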
## Incident States

| State | Description |
|-------|-------------|
| New | Incident just created |
| Ack | Acknowledged, under review |
| Assigned | Assigned to specific person/team |
| Resolved | Issue fixed, incident closed |
## Managing Incidents

### Acknowledge

1. Navigate to the Incident Manager
2. Find the new incident
3. Click **Ack** to acknowledge
4. Incident moves to the acknowledged state
### Assign

1. Select the acknowledged incident
2. Click **Assign**
3. Search for a user or team
4. Add assignment notes
5. A task is created for the assignee
### Document Root Cause

1. Open the incident details
2. Click **Root Cause**
3. Document:
   - What went wrong
   - Why it happened
   - How it was discovered
4. Save for future reference
### Resolve

1. Open the incident
2. Click **Resolve**
3. Select a resolution reason:
   - **Fixed** - Issue corrected
   - **False Positive** - Test was wrong
   - **Duplicate** - Same as another incident
   - **Won't Fix** - Accepted as-is
4. Add resolution comments
5. Confirm the resolution
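Resolutions can also be recorded programmatically via the OpenMetadata REST API. The endpoint path, payload field names, and reason values below are assumptions mirroring the UI workflow; verify them against your server's API reference before relying on them:

```python
# Hypothetical helper mirroring the Resolve dialog; field names and the
# endpoint in the comment below are assumptions -- check your
# OpenMetadata server's API docs.
def build_resolution_payload(test_case_fqn: str, reason: str, comment: str) -> dict:
    """Assemble a resolution payload for an incident."""
    valid_reasons = {"Fixed", "FalsePositive", "Duplicate", "WontFix"}
    if reason not in valid_reasons:
        raise ValueError(f"Unknown resolution reason: {reason}")
    return {
        "testCaseReference": test_case_fqn,
        "testCaseResolutionStatusType": "Resolved",
        "testCaseResolutionStatusDetails": {
            "testCaseFailureReason": reason,
            "testCaseFailureComment": comment,
        },
    }

payload = build_resolution_payload(
    "prod_db.sales.orders.column_values_to_be_not_null",  # made-up FQN
    "Fixed",
    "Upstream loader patched; nulls backfilled.",
)
# requests.post(f"{server}/api/v1/dataQuality/testCases/testCaseIncidentStatus",
#               headers=auth_headers, json=payload)  # assumed endpoint
```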
Resolution Workflow
1. Failure Notification → System alerts on test failure
2. Acknowledgment → Team member confirms awareness
3. Assignment → Routes to knowledgeable person
4. Status Updates → Assigned team communicates progress
5. Resolution → All stakeholders notified of fix
## Historical Analysis

Past incidents serve as a troubleshooting handbook:

- Review similar scenarios
- Access previous resolutions
- Learn from patterns
- Improve test coverage
## Lineage for Impact Analysis

### Lineage in Data Quality Context

Use lineage to understand:

- Which downstream tables are affected by issues
- Which upstream sources might be the root cause
- The impact radius of data quality problems

### Exploring Lineage

1. Navigate to the table's **Lineage** tab
2. View upstream sources (data origins)
3. View downstream targets (data consumers)
4. Click nodes for quality status
### Lineage Configuration

| Setting | Range | Purpose |
|---------|-------|---------|
| Upstream Depth | 1-3 | How far back to trace |
| Downstream Depth | 1-3 | How far forward to trace |
| Nodes per Layer | 5-50 | Max nodes displayed |
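The depth settings bound how far the lineage graph is walked. A minimal sketch of depth-limited downstream impact analysis, with a made-up lineage graph:

```python
from collections import deque

def affected_downstream(edges: dict, start: str, max_depth: int) -> set:
    """Breadth-first walk of downstream lineage, stopping after max_depth layers."""
    seen, frontier = set(), deque([(start, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == max_depth:
            continue  # depth limit reached; do not expand further
        for child in edges.get(node, []):
            if child not in seen:
                seen.add(child)
                frontier.append((child, depth + 1))
    return seen

# Hypothetical lineage: raw_orders feeds staging, which feeds two marts.
lineage = {
    "raw_orders": ["stg_orders"],
    "stg_orders": ["mart_revenue", "mart_churn"],
    "mart_revenue": ["exec_dashboard"],
}

affected_downstream(lineage, "raw_orders", 2)
# depth 2 reaches the marts but not exec_dashboard
```

The same traversal run against `reversed` edges gives the upstream candidates for root-cause analysis.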
### Lineage Layers

| Layer | Quality Use Case |
|-------|------------------|
| Column | Track field transformations |
| Observability | See test results on each node |
| Service | Cross-system impact analysis |
### Observability Layer

Enabling the observability layer shows:

- Test pass/fail status on each node
- Visual indicators propagated from failing tests
- Quick identification of problem sources
## Profiler and Test Scheduling

### Cron Expressions

| Expression | Schedule |
|------------|----------|
| `0 0 * * *` | Daily at midnight |
| `0 */6 * * *` | Every 6 hours |
| `0 0 * * 0` | Weekly on Sunday |
| `0 0 1 * *` | Monthly on the 1st |
| `*/30 * * * *` | Every 30 minutes |
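A quick way to sanity-check a cron expression is to test it against a concrete timestamp. This minimal matcher handles only the `*`, `*/n`, and literal-number forms used in the table above, and ANDs all five fields (real cron treats day-of-month and day-of-week specially):

```python
def field_matches(field: str, value: int) -> bool:
    """Match one cron field: '*', '*/n' steps, or a literal number."""
    if field == "*":
        return True
    if field.startswith("*/"):
        return value % int(field[2:]) == 0
    return int(field) == value

def cron_matches(expr: str, minute: int, hour: int, dom: int, month: int, dow: int) -> bool:
    """Check a five-field cron expression against a moment in time."""
    fields = expr.split()
    return all(field_matches(f, v)
               for f, v in zip(fields, (minute, hour, dom, month, dow)))

cron_matches("0 */6 * * *", minute=0, hour=12, dom=1, month=6, dow=3)  # noon fires
cron_matches("0 0 * * 0", minute=0, hour=0, dom=15, month=6, dow=0)   # Sunday midnight fires
```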
### Recommended Schedules

| Workload Type | Profiler | Tests |
|---------------|----------|-------|
| Batch (Daily) | Daily after load | Daily after load |
| Streaming | Every 6 hours | Every hour |
| Critical | Hourly | Every 15 minutes |
| Archive | Weekly | Weekly |
Managing Schedules
Navigate to Settings → Services → [Service]
Go to Ingestion tab
View/edit scheduled workflows:
Metadata ingestion
Profiler
Test suites
## Best Practices

### Test Coverage

- **Start with critical tables** - Tier1 assets first
- **Cover basics first**:
  - Null checks on required columns
  - Uniqueness on primary keys
  - Range checks on numeric fields
- **Add business rules** - Custom SQL for domain logic
- **Test incrementally** - New rows, not the full table
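The basics above translate directly into simple SQL assertions where a count of zero means the check passes. A sketch against an in-memory SQLite table (table and column names are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (order_id INTEGER, customer_id INTEGER, amount REAL);
    INSERT INTO orders VALUES (1, 10, 25.0), (2, 11, 40.0), (2, 12, NULL);
""")

# Null check on a required column: failing rows should be zero.
nulls = conn.execute(
    "SELECT COUNT(*) FROM orders WHERE amount IS NULL").fetchone()[0]

# Uniqueness on the primary key: duplicated ids indicate a failure.
dupes = conn.execute(
    "SELECT COUNT(*) FROM (SELECT order_id FROM orders "
    "GROUP BY order_id HAVING COUNT(*) > 1)").fetchone()[0]

# Range check on a numeric field.
out_of_range = conn.execute(
    "SELECT COUNT(*) FROM orders WHERE amount < 0 OR amount > 10000").fetchone()[0]

print(nulls, dupes, out_of_range)  # 1 1 0 -> the first two checks fail
```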
### Profiler Configuration

- **Sample appropriately** - 10-50% is usually sufficient
- **Exclude large columns** - Skip LOBs and JSON
- **Schedule off-peak** - Avoid production impact
- **Timeout appropriately** - Set realistic limits
### Alert Management

- **Avoid alert fatigue** - Start with critical tests
- **Route appropriately** - Right team for the right issues
- **Include context** - Link to asset and test details
- **Set severity levels** - Not all failures are equal
### Incident Response

- **Acknowledge quickly** - Show awareness
- **Document thoroughly** - Future you will thank you
- **Communicate status** - Keep stakeholders informed
- **Learn from incidents** - Improve tests and processes