Manual Integration
Full SDK integration for cron monitoring with custom status updates and error handling. This approach gives you maximum control over what gets monitored and how.
✅ Perfect for:
- Custom cron logic that doesn't use standard libraries
- Fine-grained control over what gets monitored
- Customized error handling and reporting
- Serverless functions or cloud-based scheduling (AWS Lambda, Vercel, etc.)
- Complex job workflows with multiple steps
❌ Consider other approaches if:
- Using node-cron, cron, or node-schedule → Try Automatic instead
- Just need basic failure notifications → Try UI Setup instead
- Managing many monitors programmatically → Try Advanced Setup instead
Before you start, you'll need to:
- Install and configure the Sentry SDK
- Create monitors in Sentry (or use programmatic creation)
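If the SDK isn't initialized yet, a minimal Node setup is sketched below; the DSN shown is a placeholder for your own project's DSN:

import * as Sentry from "@sentry/node";

// Initialize the SDK once at process startup.
// The DSN below is a placeholder - use the DSN from your Sentry project settings.
Sentry.init({
  dsn: "https://examplePublicKey@o0.ingest.sentry.io/0",
});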
First, create your monitor in Sentry:
- Go to Alerts → Create Alert → Cron Monitor
- Configure your monitor settings (name, schedule, timezone)
- Note the monitor slug for use in your code
import * as Sentry from "@sentry/node";

// Wrap your job function
Sentry.withMonitor(
  "my-monitor-slug", // Monitor slug from Sentry
  async () => {
    console.log("Starting data processing...");

    // Your job logic here
    await processData();
    await generateReports();

    console.log("Job completed successfully");
  },
);
Requirements: SDK version 7.76.0 or higher.
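If you prefer to create the monitor from code instead of the UI, newer SDK versions accept a monitor configuration as an optional third argument to withMonitor (and as a second argument to captureCheckIn); the monitor is created or updated the first time a check-in is sent. A sketch, assuming your SDK version supports monitor upserts:

// Monitor configuration used to create or update ("upsert") the monitor on check-in
const monitorConfig = {
  schedule: { type: "crontab", value: "0 * * * *" }, // run every hour
  checkinMargin: 5, // minutes to wait before a check-in counts as missed
  maxRuntime: 30, // minutes before an in-progress job counts as failed
  timezone: "America/Los_Angeles",
};

Sentry.withMonitor(
  "my-monitor-slug",
  async () => {
    await processData();
  },
  monitorConfig,
);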
import * as Sentry from "@sentry/node";

async function runMyJob() {
  // 🟡 Notify Sentry your job is starting
  const checkInId = Sentry.captureCheckIn({
    monitorSlug: "my-monitor-slug",
    status: "in_progress",
  });

  try {
    console.log("Starting data processing...");

    // Your job logic here
    await processData();
    await generateReports();

    // 🟢 Notify Sentry your job completed successfully
    Sentry.captureCheckIn({
      checkInId,
      monitorSlug: "my-monitor-slug",
      status: "ok",
    });

    console.log("Job completed successfully");
  } catch (error) {
    // 🔴 Notify Sentry your job failed
    Sentry.captureCheckIn({
      checkInId,
      monitorSlug: "my-monitor-slug",
      status: "error",
    });

    // Also capture the error details
    Sentry.captureException(error);

    throw error; // Re-throw to maintain normal error handling
  }
}

// Schedule your job (with your preferred method)
setInterval(runMyJob, 60 * 60 * 1000); // Every hour
Note: Heartbeat monitoring (sending a single check-in only when the job finishes) detects missed runs but not runtime timeouts; the in_progress check-in above is what lets Sentry enforce a maximum runtime.
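For reference, a heartbeat is just one check-in reporting the final status, with no in_progress check-in. A minimal sketch using the same hypothetical monitor slug:

// Heartbeat pattern: report only the final status.
// Sentry flags the monitor if this check-in never arrives on schedule,
// but it can't tell how long the job ran.
async function runHeartbeatJob() {
  await processData();

  Sentry.captureCheckIn({
    monitorSlug: "my-monitor-slug",
    status: "ok",
  });
}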
Use your preferred scheduling method:
// Run every hour
setInterval(
  () => {
    Sentry.withMonitor("hourly-job", async () => {
      await runHourlyTask();
    });
  },
  60 * 60 * 1000,
);
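In serverless environments the platform does the scheduling (AWS Lambda scheduled events, Vercel cron jobs, etc.), so you only wrap the handler body. A sketch of a hypothetical scheduled handler, assuming a nightly-report monitor and a generateReport job function:

import * as Sentry from "@sentry/node";

// Hypothetical scheduled handler - the platform invokes it on a cron schedule
export const handler = async () => {
  try {
    await Sentry.withMonitor("nightly-report", async () => {
      await generateReport();
    });
  } finally {
    // Flush buffered events before the runtime is suspended
    await Sentry.flush(2000);
  }
};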
async function runJobWithContext() {
  const checkInId = Sentry.captureCheckIn({
    monitorSlug: "contextual-job",
    status: "in_progress",
  });

  try {
    const records = await fetchRecords();

    for (const record of records) {
      // Set context for each record; the callback is async so the record
      // can be awaited while its scope is active
      await Sentry.withScope(async (scope) => {
        scope.setTag("record_id", record.id);
        scope.setContext("record_data", {
          type: record.type,
          size: record.data.length,
        });

        // Process record - any errors will include this context
        await processRecord(record);
      });
    }

    Sentry.captureCheckIn({
      checkInId,
      monitorSlug: "contextual-job",
      status: "ok",
    });
  } catch (error) {
    // Error already has context from withScope
    Sentry.captureCheckIn({
      checkInId,
      monitorSlug: "contextual-job",
      status: "error",
    });

    throw error;
  }
}
async function runJobWithPartialFailures() {
  const checkInId = Sentry.captureCheckIn({
    monitorSlug: "partial-failure-job",
    status: "in_progress",
  });

  let hasErrors = false;
  const results = [];

  try {
    const tasks = await getTasks();

    for (const task of tasks) {
      try {
        const result = await processTask(task);
        results.push({ success: true, task: task.id, result });
      } catch (error) {
        hasErrors = true;
        results.push({
          success: false,
          task: task.id,
          error: error.message,
        });

        // Report individual task failures without failing the whole job
        Sentry.captureException(error, {
          tags: { task_id: task.id },
          extra: { partial_failure: true },
        });
      }
    }

    // Report overall job status
    if (hasErrors) {
      // Job completed but with some failures
      Sentry.addBreadcrumb({
        message: `Job completed with ${results.filter((r) => !r.success).length} failures`,
        level: "warning",
      });
    }

    Sentry.captureCheckIn({
      checkInId,
      monitorSlug: "partial-failure-job",
      status: hasErrors ? "error" : "ok",
    });
  } catch (error) {
    // Complete job failure
    Sentry.captureException(error);
    Sentry.captureCheckIn({
      checkInId,
      monitorSlug: "partial-failure-job",
      status: "error",
    });
    throw error;
  }

  return results;
}
- Managing many monitors? See Advanced Setup for programmatic management
- Want easier setup? Consider Automatic if you're using supported libraries
- Need help? Check Troubleshooting
- Complex workflows? Explore Sentry's Performance Monitoring integration