I’ve been running OpenClaw on my laptop for about a week and a half. It’s been useful, but I hit a practical problem quickly: as the number of automations grew, I lost a clean view of what was running, when it was running, and what failed versus what quietly succeeded.

This post is a simple build log: what I set up, why it helped, and the schedule I’m actually using.

Why I needed a schedule (the “no glue” problem)

I was feeding OpenClaw a lot of information over time, but I didn’t have any “glue” to track what I’d already given it, what was running when, and what the system was doing day-to-day. I wanted a simple, visual way to answer:

  1. What runs daily vs. nightly vs. weekly, and how often?
  2. What happens when something fails?
  3. Where do logs and status live?
  4. How do I keep backups safe?

The issue wasn’t model capability. It was coordination.

The mental model

The system is intentionally simple:

Task buckets → Central Cron-Log DB → Notification Layer (on failure)

[Figure] High-level architecture: task buckets feed the Central Cron-Log DB, and failures route to notifications.
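
To make the glue concrete, here’s a minimal sketch in TypeScript. It’s illustrative, not OpenClaw’s actual API: runJob, logRun, and notifyFailure are names I made up for the three layers above.

type RunStatus = "ok" | "error";

interface RunRecord {
  jobId: string;
  startedAt: string;   // ISO timestamp, UTC
  durationMs: number;
  status: RunStatus;
  error?: string;
}

// Run one task bucket, record the outcome in the cron-log DB,
// and notify only on failure ("fail loud, succeed quiet").
async function runJob(
  jobId: string,
  task: () => Promise<void>,
  logRun: (r: RunRecord) => Promise<void>,       // cron-log DB write
  notifyFailure: (r: RunRecord) => Promise<void> // Telegram/email layer
): Promise<void> {
  const startedAt = new Date().toISOString();
  const t0 = Date.now();
  try {
    await task();
    await logRun({ jobId, startedAt, durationMs: Date.now() - t0, status: "ok" });
  } catch (err) {
    const record: RunRecord = {
      jobId,
      startedAt,
      durationMs: Date.now() - t0,
      status: "error",
      error: err instanceof Error ? err.message : String(err),
    };
    await logRun(record);        // log first...
    await notifyFailure(record); // ...then fail loud
  }
}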

My actual schedule (what runs when)

[Figure] The detailed schedule in ET: overnight, daytime, weekly/monthly, and hourly backup workflows.

Central Cron-Log DB: the “single source of truth”

This is the part that made the whole setup calmer.

It consolidates every job definition, schedule, and run outcome in one place, instead of scattering them across task-specific files.

Per job, it tracks the cron schedule and timezone, the next and last run times, the last status, a consecutive-error counter, and the most recent error message.

Rules I follow: don’t log secrets, and keep logs lean enough to debug without creating noise.

Here’s what a job’s state record looks like:

{
  "jobId": "morning-briefing",
  "schedule": "0 7 * * *",
  "tz": "America/New_York",
  "state": {
    "nextRunAt": "2026-02-22T12:00:00Z",
    "lastRunAt": "2026-02-21T12:00:03Z",
    "lastStatus": "ok",
    "consecutiveErrors": 0,
    "lastError": null
  }
}
Run history is appended as one line per execution:

2026-02-21T12:00:03Z job=morning-briefing status=ok durationMs=41234
2026-02-21T13:00:02Z job=health-check status=error error="timeout" retry=1 nextRun=2026-02-21T14:00:00Z
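
If you want the same shape programmatically, here’s a tiny formatter; the function name and field set are mine, mirroring the lines above:

function formatRunLine(r: {
  jobId: string;
  finishedAt: string; // ISO timestamp
  status: "ok" | "error";
  durationMs?: number;
  error?: string;
  retry?: number;
  nextRun?: string;
}): string {
  // Start with the fields every line has, then append the optional ones.
  const parts = [`${r.finishedAt} job=${r.jobId} status=${r.status}`];
  if (r.durationMs !== undefined) parts.push(`durationMs=${r.durationMs}`);
  if (r.error) parts.push(`error="${r.error}"`);
  if (r.retry !== undefined) parts.push(`retry=${r.retry}`);
  if (r.nextRun) parts.push(`nextRun=${r.nextRun}`);
  return parts.join(" ");
}

// formatRunLine({ jobId: "morning-briefing", finishedAt: "2026-02-21T12:00:03Z",
//   status: "ok", durationMs: 41234 })
// -> "2026-02-21T12:00:03Z job=morning-briefing status=ok durationMs=41234"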

Failure handling + notifications

Operating philosophy: fail loud, succeed quiet.

Success shouldn’t spam. A failure should carry enough context, via Telegram or email, to act on quickly.

A useful failure alert should include (a minimal sketch follows this list):

  1. The job name and when it failed.
  2. The error message.
  3. How many retries have been attempted.
  4. When the next attempt runs.
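
Here’s a minimal sketch of the Telegram path using the Bot API’s sendMessage method. TELEGRAM_BOT_TOKEN and TELEGRAM_CHAT_ID are environment variable names I chose, not anything OpenClaw defines, and it assumes Node 18+ for the global fetch.

// Sends a failure alert via the Telegram Bot API.
async function sendFailureAlert(alert: {
  jobId: string;
  failedAt: string;
  error: string;
  retry: number;
  nextRun: string;
}): Promise<void> {
  const token = process.env.TELEGRAM_BOT_TOKEN;
  const chatId = process.env.TELEGRAM_CHAT_ID;
  if (!token || !chatId) throw new Error("Telegram credentials not configured");

  // All four pieces of context from the list above, in one short message.
  const text = [
    `❌ ${alert.jobId} failed at ${alert.failedAt}`,
    `error: ${alert.error}`,
    `retry: ${alert.retry}, next run: ${alert.nextRun}`,
  ].join("\n");

  const res = await fetch(`https://api.telegram.org/bot${token}/sendMessage`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ chat_id: chatId, text }),
  });
  if (!res.ok) throw new Error(`Telegram API returned ${res.status}`);
}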

Backups: Git auto-sync + local DB snapshots

I use two backup layers:

  1. Hourly Git Auto-Sync / Backup for versioned workspace artifacts.
  2. Local on-device backups for some databases I’ve put together.

Why both? Git gives recoverable config/file history. Local DB snapshots give data-level restore points.
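
Here’s a sketch of both layers as one hourly script, with placeholder paths and an assumption that the local databases are SQLite (swap in your DB’s native dump tool otherwise):

import { execSync } from "node:child_process";

const WORKSPACE = "/home/me/openclaw-workspace"; // placeholder path
const DB_PATH = "/home/me/data/jobs.db";         // placeholder path
const SNAPSHOT = `/home/me/backups/jobs-${new Date().toISOString().slice(0, 13)}.db`;

// Layer 1: hourly Git auto-sync. Commit only if something changed,
// so the history isn't cluttered with empty commits.
const dirty = execSync("git status --porcelain", { cwd: WORKSPACE }).toString().trim();
if (dirty) {
  execSync("git add -A", { cwd: WORKSPACE });
  execSync(`git commit -m "auto-sync ${new Date().toISOString()}"`, { cwd: WORKSPACE });
  execSync("git push", { cwd: WORKSPACE });
}

// Layer 2: local DB snapshot. sqlite3's .backup takes a consistent copy
// even while the database is in use, unlike cp on the raw file.
execSync(`sqlite3 ${DB_PATH} ".backup '${SNAPSHOT}'"`);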

Safety tips: keep secrets and tokens out of the auto-synced repo (the same “don’t log secrets” rule from above), and restore a snapshot once in a while to prove the backups actually work.

What I learned after 2 weeks

What I’m adding next

Appendix: Full schedule (copy/paste friendly)

Overnight

During the Day

Weekly / Monthly

Backups

Sandeep Aulakh

Director of Technical Architects at Salesforce, where he leads AI and Data Cloud adoption for Fortune 500 enterprises. When he's not explaining technology, you'll find him tinkering with new tools, running, rowing, or trying yet another type of coffee.

He writes about enterprise AI, building with agents, and what actually works vs. what just demos well. Find him on LinkedIn.