GitLab CI — alert when a pipeline runs longer than N minutes
A GitLab pipeline sometimes hangs in the running state for an hour or more (a runner lost connectivity, or a job sits in a curl call with no timeout) — you only notice when a merge request 'isn't moving'.
Recipe
#!/usr/bin/env bash
# /opt/gitlab-stuck.sh, invoked from /etc/cron.d/gitlab-stuck:
# */10 * * * * root /opt/gitlab-stuck.sh
set -euo pipefail

# Fail fast if the required credentials are missing
PROJECT_ID=${GITLAB_PROJECT_ID:?GITLAB_PROJECT_ID is required}
TOKEN=${GITLAB_TOKEN:?GITLAB_TOKEN is required}
HOST=${GITLAB_HOST:-gitlab.com}
MAX_MIN=${MAX_MIN:-30}   # alert threshold in minutes

# Only the first 50 running pipelines are checked; raise per_page if needed
PIPES=$(curl -fsS -H "PRIVATE-TOKEN: $TOKEN" \
  "https://$HOST/api/v4/projects/$PROJECT_ID/pipelines?status=running&per_page=50")
NOW=$(date -u +%s)
STUCK=0
# Feed the loop via process substitution, not a pipe: `... | while` runs
# the loop in a subshell, so increments to STUCK would be lost after it.
while read -r p; do
  ID=$(echo "$p" | jq -r .id)
  CREATED=$(echo "$p" | jq -r .created_at)
  CTS=$(date -ud "$CREATED" +%s)   # GNU date parses GitLab's ISO 8601 timestamps
  AGE_MIN=$(( (NOW - CTS) / 60 ))
  if [ "$AGE_MIN" -gt "$MAX_MIN" ]; then
    STUCK=$((STUCK + 1))
    echo "stuck pipeline #$ID — ${AGE_MIN}m running"
  fi
done < <(echo "$PIPES" | jq -c '.[]')
if [ "$STUCK" -gt 0 ]; then
  curl -fsS "${HEARTBEAT_URL:?HEARTBEAT_URL is required}" --data "stuck=$STUCK"
  exit 2
fi
echo "OK (no stuck pipelines)"
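Before deploying, the age arithmetic can be sanity-checked in isolation by pinning "now" to a fixed value instead of the real clock (both timestamps below are invented for the test):

```shell
# Verify the age-in-minutes calculation the script relies on,
# with a fixed "now" instead of the current time.
CREATED="2024-01-01T10:00:00Z"               # made-up pipeline created_at
NOW=$(date -ud "2024-01-01T10:47:30Z" +%s)   # pretend current time
CTS=$(date -ud "$CREATED" +%s)
AGE_MIN=$(( (NOW - CTS) / 60 ))
echo "$AGE_MIN"   # → 47
```

Note the integer division: 2850 seconds rounds down to 47 minutes, so a pipeline is flagged only once it is a full minute past the threshold.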
Same thing in Enterno.io
Wire this script to an Enterno heartbeat: alerts go straight to Telegram, and you get a history of 'when did a pipeline last hang' without building your own Grafana dashboard.
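One cron gotcha worth calling out: cron does not load your shell profile, so the environment variables the script expects have to be defined in the crontab itself. A sketch of the cron.d file — every value below is a placeholder, and the heartbeat URL format is whatever your Enterno check gives you:

```
# /etc/cron.d/gitlab-stuck  (placeholder values, substitute your own)
GITLAB_PROJECT_ID=12345678
GITLAB_TOKEN=glpat-REPLACE_ME
HEARTBEAT_URL=https://example.invalid/heartbeat/REPLACE_ME
*/10 * * * * root /opt/gitlab-stuck.sh >>/var/log/gitlab-stuck.log 2>&1
```

Redirecting output to a log file keeps the per-pipeline "stuck" lines around for later inspection instead of having cron mail them to root.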
Related recipes
A `schedule:`-driven workflow sometimes silently stops (forked repos, expired tokens, GH outages). You only realise a week later when backups are missing.
The Jenkins queue grows — an agent went away, label mismatch, or executors are saturated. PR checks hang, devs start chat-pinging "what is up with CI?".
Ensure your site returns 2xx every minute, alert to Slack/Telegram on failure.