Troubleshooting Guide
This guide helps you diagnose and resolve common issues with Tekton Pruner.
Common Issues
1. Resources Not Being Pruned
Symptoms
- PipelineRuns or TaskRuns remain after their expected deletion time
- No pruning activities visible in logs
Possible Causes and Solutions
- Configuration Not Applied
```shell
# Check whether the ConfigMap exists
kubectl get configmap tekton-pruner-default-spec -n tekton-pipelines
# Inspect the configuration content
kubectl get configmap tekton-pruner-default-spec -n tekton-pipelines -o yaml
```
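For comparison, a populated ConfigMap typically looks like the sketch below. The `global-config` data key matches what the commands in this guide query; the individual field names and values inside it are illustrative assumptions, so verify them against the documentation for your installed pruner release:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: tekton-pruner-default-spec
  namespace: tekton-pipelines
data:
  global-config: |
    # Field names below are assumptions; check your pruner version's docs
    ttlSecondsAfterFinished: 300
    successfulHistoryLimit: 3
    failedHistoryLimit: 3
```

If the ConfigMap exists but the `global-config` key is empty or malformed, the controller has nothing to act on, which matches the "no pruning activity" symptom above.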
- Controller Not Running
```shell
# Check controller status
kubectl get pods -n tekton-pipelines -l app=tekton-pruner-controller
# Check controller logs
kubectl logs -n tekton-pipelines -l app=tekton-pruner-controller
```
- Incorrect Resource Labels
```shell
# Check resource labels
kubectl get pipelineruns --show-labels
```
2. Unexpected Resource Deletion
Symptoms
- Resources being deleted earlier than expected
- Too many resources being pruned
Possible Causes and Solutions
- Check TTL Configuration
```shell
# Verify TTL settings in the config
kubectl get configmap tekton-pruner-default-spec -n tekton-pipelines -o jsonpath='{.data.global-config}'
```
- Check History Limits
- Ensure history limits are set appropriately
- Verify resource completion status is being detected correctly
3. Permission Issues
Symptoms
- Error messages about RBAC in controller logs
- Unable to delete resources
Solutions
- Verify RBAC Configuration
```shell
# Check ClusterRole
kubectl get clusterrole tekton-pruner-controller
# Check ClusterRoleBinding
kubectl get clusterrolebinding tekton-pruner-controller
# Check ServiceAccount
kubectl get serviceaccount tekton-pruner-controller -n tekton-pipelines
```
- Apply Missing RBAC Rules
```shell
kubectl apply -f config/200-clusterrole.yaml
kubectl apply -f config/201-clusterrolebinding.yaml
```
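As a reference point when reading the ClusterRole, the pruner needs permission to list, watch, and delete the run resources it manages. The sketch below shows the general shape of such a rule; it is illustrative only (the verb list and resource set in your shipped `config/200-clusterrole.yaml` may differ), so compare against the shipped manifest rather than applying this as-is:

```yaml
# Illustrative only; the authoritative rules live in config/200-clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: tekton-pruner-controller
rules:
  - apiGroups: ["tekton.dev"]
    resources: ["pipelineruns", "taskruns"]
    verbs: ["get", "list", "watch", "delete"]
```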
Collecting Debug Information
1. Controller Logs
```shell
# Get the most recent log lines
kubectl logs -n tekton-pipelines -l app=tekton-pruner-controller --tail=100
# Get logs with timestamps
kubectl logs -n tekton-pipelines -l app=tekton-pruner-controller --timestamps=true
# Follow logs in real time
kubectl logs -n tekton-pipelines -l app=tekton-pruner-controller -f
```
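When scanning logs, filtering for RBAC failures and pruning keywords narrows the search quickly. The snippet below is a self-contained sketch that runs such a filter against sample log lines (the sample text is invented for illustration; in practice you would pipe the `kubectl logs` output above into the same `grep`):

```shell
# Sample log lines (invented for illustration); real input comes from:
#   kubectl logs -n tekton-pipelines -l app=tekton-pruner-controller
cat <<'EOF' > /tmp/pruner.log
{"level":"info","msg":"reconciling PipelineRun demo-run-1"}
{"level":"error","msg":"pipelineruns.tekton.dev is forbidden: cannot delete resource"}
{"level":"info","msg":"pruned TaskRun demo-task-7"}
EOF
# Surface RBAC failures and pruning activity in one pass
grep -iE 'error|forbidden|prun' /tmp/pruner.log
```

A `forbidden` line like the one above points at the RBAC checks in the Permission Issues section.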
2. Resource Status
```shell
# List PipelineRuns with details
kubectl get pipelineruns -o wide
# Get details for a specific PipelineRun
kubectl describe pipelinerun <pipelinerun-name>
```
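If the `describe` output is noisy, extracting just the completion timestamp is often enough to confirm whether a run is eligible for TTL-based pruning. On a live cluster that is typically `kubectl get pipelinerun <pipelinerun-name> -o jsonpath='{.status.completionTime}'`; the runnable sketch below applies the same idea to a saved dump (the JSON content is invented for illustration):

```shell
# Invented PipelineRun dump for illustration; on a cluster you would run:
#   kubectl get pipelinerun <pipelinerun-name> -o json > /tmp/pipelinerun.json
cat <<'EOF' > /tmp/pipelinerun.json
{"status": {"completionTime": "2024-05-01T12:00:00Z",
            "conditions": [{"type": "Succeeded", "status": "True"}]}}
EOF
# Extract the completion timestamp; a run with no completionTime has not
# finished and is not a pruning candidate
grep -o '"completionTime": "[^"]*"' /tmp/pipelinerun.json
```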
3. Configuration Validation
```shell
# Export the current config
kubectl get configmap tekton-pruner-default-spec -n tekton-pipelines -o yaml > current-config.yaml
# Compare with the default config
diff current-config.yaml config/600-tekton-pruner-default-spec.yaml
```
Note: For detailed information about ConfigMap validation, including webhook validation rules, required labels, and common validation errors, see the ConfigMap Validation guide.
Best Practices for Troubleshooting
1. Start with Controller Logs
- Check for error messages
- Look for pruning activity
- Verify that the configuration is being read
2. Verify Resource State
- Check resource status
- Verify labels and annotations
- Confirm completion timestamps
3. Test with a Simple Configuration
- Start with basic global settings
- Add complexity gradually
- Test one feature at a time
4. Monitor Resource Changes
- Watch resources in real time
- Track deletion patterns
- Verify pruning behavior
Getting Help
If you’re still experiencing issues:
- Search existing GitHub issues
- Collect relevant logs and configuration
- Open a new issue with:
- Clear description of the problem
- Steps to reproduce
- Expected vs actual behavior
- Relevant logs and configuration
- Kubernetes and Tekton versions