
The Todo App Is Dead. Long Live the Todo App.

Brijesh Agarwal, Co-founder & CTO
Mar 12, 2026 · 7 min read

TL;DR

  • The todo app was the first application most developers ever built. It taught CRUD, REST APIs, database schema design, and frontend state management.
  • That mental model assumed the user does everything: creates tasks, sets priorities, checks them off.
  • The new todo app creates tasks from meeting transcripts, infers completion from real-world signals (a Swiggy order, a calendar event, a Slack message), and has an agent decide what matters next.
  • The fundamental abstraction has shifted from "user tells app what to do" to "app understands what the user needs to do."
  • Every developer, junior or senior, needs to rebuild their mental model from scratch. The skills that made you effective in 2020 are table stakes now, not differentiators.

I built my first todo app in 2014. Express.js, MongoDB, a React frontend with a form and a checkbox list. I remember the exact moment the POST request hit the database and the task appeared on screen. It felt like I understood how the internet worked.

That application taught me HTTP methods, request-response cycles, database schemas, CRUD operations, authentication, and deployment. Every developer I know has a version of this story. The todo app was our "Hello, World" for real software.

Twelve years later, I'm watching that entire mental model become obsolete.

What the classic todo app actually taught us

Strip away the nostalgia and look at what the todo app was really teaching. It was a course in explicit, user-driven state management:

The user creates a task. You learn POST requests, form validation, database writes.

The user reads their tasks. You learn GET requests, query parameters, pagination, frontend rendering.

The user updates a task. You learn PUT/PATCH, optimistic UI updates, conflict resolution.

The user deletes a task. You learn DELETE, cascading effects, confirmation UIs.

The user checks a box. You learn state transitions, boolean flags, timestamps.

Every interaction followed the same pattern: the human decides, the human acts, the system records. The application was a dumb ledger. It stored what you told it and showed it back to you. The entire architecture assumed that the user was the only source of truth about what needed to happen and whether it had happened.
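The whole ledger model fits in a few dozen lines. A minimal sketch of the classic pattern, with an in-memory Map standing in for MongoDB (names and shapes are illustrative, not a specific framework's API):

```typescript
// Classic todo app: the human decides, the human acts, the system records.
// Each function maps to one CRUD operation from the list above.

interface Todo {
  id: number;
  title: string;
  completed: boolean;
  completedAt: Date | null;
}

const todos = new Map<number, Todo>();
let nextId = 1;

// POST /todos — the user creates a task
function createTodo(title: string): Todo {
  const todo: Todo = { id: nextId++, title, completed: false, completedAt: null };
  todos.set(todo.id, todo);
  return todo;
}

// GET /todos — the user reads their tasks
function listTodos(): Todo[] {
  return [...todos.values()];
}

// PATCH /todos/:id — the user checks the box
function completeTodo(id: number): Todo {
  const todo = todos.get(id);
  if (!todo) throw new Error(`No todo with id ${id}`);
  todo.completed = true;
  todo.completedAt = new Date();
  return todo;
}

// DELETE /todos/:id — the user deletes a task
function deleteTodo(id: number): boolean {
  return todos.delete(id);
}
```

Every state transition is explicit and synchronous: nothing happens unless the user asks for it.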

That assumption held for a long time. It doesn't hold anymore.

The mental model that broke

Here is a real scenario. I need to order a birthday cake for my daughter. In the classic todo app, I would:

  1. Open the app
  2. Type "Order birthday cake"
  3. Maybe set a due date
  4. Later, open Swiggy, place the order
  5. Go back to the todo app
  6. Check the box

Six steps. Four context switches. The todo app contributed nothing except reminding me of something I already knew I needed to do.

Now consider what an AI-native todo app could do:

  1. I mention the birthday in a meeting. The app picks it up from the transcript and creates the task automatically.
  2. It checks my calendar, sees the party is Saturday, and sets the deadline for Thursday delivery.
  3. On Thursday morning, it nudges me. I tell it to go ahead and order. Or maybe it just does, because I pre-approved a spending threshold.
  4. When the Swiggy order confirmation hits my email, the app marks the task complete. No checkbox. No context switch.

The difference is not that the second version is "smarter." The difference is that it operates on a fundamentally different abstraction. The user is no longer the sole actor. The application perceives, infers, and acts.

From CRUD to Perceive-Decide-Act

The classic todo app architecture was straightforward:

User -> Form -> API -> Database -> Response -> UI

Every arrow is a direct, synchronous, user-initiated action. The developer needed to understand HTTP, SQL, and rendering. That was the job.

The architecture of an AI-native todo app looks nothing like this:

Signals (calendar, email, voice, meetings, apps)
    -> Perception layer (extract intent, entities, deadlines)
    -> Agent (prioritize, schedule, delegate, act)
    -> Feedback loop (confirm completion from real-world signals)
    -> UI (surfaces decisions, asks for approval when uncertain)

The developer building this needs a completely different skill set:

Signal ingestion. How do you connect to a calendar API, an email inbox, a meeting transcript service, a food delivery webhook? This is not one REST endpoint. It is dozens of event streams with different auth models, rate limits, and data formats.
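The practical answer to the heterogeneity problem is an adapter per source, all normalizing into one internal shape. A sketch, assuming hypothetical raw payloads (the field names here are illustrative, not any real provider's API):

```typescript
// Heterogeneous event sources normalized into one internal Signal shape.
// Each source gets its own adapter; downstream code only sees Signal.

interface Signal {
  source: "calendar" | "email" | "transcript" | "webhook";
  kind: string;            // e.g. "event.created", "order.confirmed"
  occurredAt: Date;
  payload: Record<string, unknown>;
}

// Adapter for a hypothetical delivery-service webhook (epoch seconds, snake_case)
function fromDeliveryWebhook(raw: { event: string; ts: number; order_id: string }): Signal {
  return {
    source: "webhook",
    kind: raw.event,
    occurredAt: new Date(raw.ts * 1000),
    payload: { orderId: raw.order_id },
  };
}

// Adapter for a hypothetical calendar event (ISO timestamps, different fields)
function fromCalendarEvent(raw: { summary: string; start: string }): Signal {
  return {
    source: "calendar",
    kind: "event.created",
    occurredAt: new Date(raw.start),
    payload: { summary: raw.summary },
  };
}
```

The auth models, rate limits, and retry policies differ per adapter, but the perception layer only ever consumes Signal.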

Intent extraction. When someone says "we should probably get a cake" in a meeting, is that a task? What about "remind me to check on the deployment"? You need a model that can distinguish actionable intent from casual conversation, and it needs to be right often enough that users trust it.
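In production this is an LLM call; a keyword heuristic can stand in to show the shape of the output, which is the important part: not a yes/no answer but an extracted intent with a confidence score. A sketch (the patterns and scores are illustrative):

```typescript
// Stand-in for the intent-extraction model. The output shape is what matters:
// actionable or not, the extracted task text, and a confidence the caller
// can threshold on.

interface ExtractedIntent {
  actionable: boolean;
  task: string | null;
  confidence: number; // 0..1 — the caller decides what to do below a threshold
}

function extractIntent(utterance: string): ExtractedIntent {
  const text = utterance.toLowerCase();

  // Direct imperative ("remind me to ...") — high confidence it's a task
  const direct = text.match(/^remind me to (.+)/);
  if (direct) {
    return { actionable: true, task: direct[1], confidence: 0.95 };
  }

  // Hedged suggestion ("we should probably ...") — actionable, but less certain
  const hedged = text.match(/we should (?:probably )?(.+)/);
  if (hedged) {
    return { actionable: true, task: hedged[1], confidence: 0.6 };
  }

  // Everything else: probably just conversation
  return { actionable: false, task: null, confidence: 0.9 };
}
```

The two examples from the paragraph above land in different confidence bands, which is exactly what lets the system treat them differently.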

Agent-based prioritization. The classic todo app showed tasks in the order you created them, or maybe let you drag them around. An AI-native app needs to reason about urgency, dependencies, context, and the user's current focus. This is not a sort function. It is a decision system.
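A first approximation of that decision system is a scoring function over deadline pressure and declared importance, rather than a sort on creation order. A sketch with illustrative weights (a real agent would also weigh dependencies and the user's current focus):

```typescript
// Prioritization as a decision, not creation-order display: score each task
// from deadline pressure plus importance, then surface the highest scores.

interface ScoredTask {
  title: string;
  dueAt: Date | null;
  importance: number; // 1 (low) .. 5 (high)
}

function urgencyScore(task: ScoredTask, now: Date): number {
  let score = task.importance;
  if (task.dueAt) {
    const hoursLeft = (task.dueAt.getTime() - now.getTime()) / 3_600_000;
    if (hoursLeft < 0) score += 10;       // overdue: surface immediately
    else if (hoursLeft < 24) score += 5;  // due within a day
    else if (hoursLeft < 72) score += 2;  // due in the next few days
  }
  return score;
}

function prioritize(tasks: ScoredTask[], now = new Date()): ScoredTask[] {
  return [...tasks].sort((a, b) => urgencyScore(b, now) - urgencyScore(a, now));
}
```

Even this toy version makes a judgment the classic app never did: an overdue low-importance task outranks an important one due next month.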

Completion inference. A checkbox is trivial to implement. Figuring out that a task is done because a downstream system emitted a signal? That requires mapping tasks to real-world outcomes, monitoring those outcomes, and handling ambiguity. The cake order might get cancelled. The deployment might roll back. "Done" is no longer a boolean you flip. It is a judgment the system makes.

What stays the same

I'm not saying throw out everything you learned from the classic todo app. Some foundations carry forward:

Data modeling still matters. You still need schemas. But instead of a tasks table with title, completed, and user_id, you need to model intents, signals, confidence scores, and provenance chains. Where did this task come from? What signal marked it done? How confident is the system?
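As a schema sketch, those three questions map directly onto fields. The names below are illustrative, not a prescribed schema:

```typescript
// The AI-native task row: instead of { title, completed, user_id }, every
// state change carries its evidence — where it came from and how sure we are.

interface Provenance {
  source: "transcript" | "email" | "calendar" | "user";
  reference: string;      // e.g. a meeting id or message id
  extractedAt: Date;
}

interface InferredTask {
  id: string;
  title: string;
  createdBy: Provenance;          // where did this task come from?
  createdConfidence: number;      // how sure are we it's a real task?
  completedBy: Provenance | null; // what signal marked it done?
  completedConfidence: number;
}

// Example: a task inferred from a meeting transcript, not yet completed.
const cakeTask: InferredTask = {
  id: "t-1",
  title: "Order birthday cake",
  createdBy: { source: "transcript", reference: "meeting-0312", extractedAt: new Date() },
  createdConfidence: 0.82,
  completedBy: null,
  completedConfidence: 0,
};
```

Notice that completion gets its own provenance chain, separate from creation: the two judgments come from different signals at different times.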

APIs still matter. You still build endpoints. But the primary consumer is not a frontend form. It is an agent that needs to read context, make decisions, and trigger actions across multiple services.

State management still matters. But state is no longer a list of objects with boolean flags. It is a graph of tasks, signals, decisions, and confidence levels that changes asynchronously from multiple sources.

The muscle memory transfers. The mental model does not.

The new fundamentals

If I were starting my career today, the todo app I'd build would teach me a different set of fundamentals:

Event-driven architecture over request-response. The classic todo app was synchronous. Click, wait, render. The new one is event-driven. Signals arrive from anywhere, at any time. You need to think in streams, not in requests.

Probabilistic systems over deterministic CRUD. When the user clicks "save," you know they meant to save. When the system infers a task from a meeting transcript, you have a confidence score, not a certainty. You need to design for "probably right" and handle the cases where you're wrong.
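Designing for "probably right" usually means thresholding: the same inferred task is handled differently depending on confidence. A sketch with illustrative cutoffs (in practice you'd tune them against how often users accept or reject suggestions):

```typescript
// Three dispositions for one inferred task, chosen by confidence band.
// The thresholds are knobs, not magic numbers.

type Disposition = "auto-create" | "ask-user" | "discard";

function dispose(confidence: number): Disposition {
  if (confidence >= 0.9) return "auto-create"; // almost certainly a real task
  if (confidence >= 0.5) return "ask-user";    // surface it, let the human confirm
  return "discard";                            // probably noise
}
```

The middle band is where the design lives: asking the user is how the system stays trustworthy while it's wrong some fraction of the time.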

Multi-system orchestration over single-database operations. The classic todo app talked to one database. The new one talks to a calendar, an email provider, a delivery service, a meeting transcription tool, and an LLM. Each has its own failure modes, latency profile, and rate limits.

Agentic decision-making over passive storage. The app does not just store and retrieve. It makes decisions: what to prioritize, when to nudge, when to act autonomously, when to ask for permission. You need to think about trust boundaries, escalation policies, and graceful degradation when the agent is wrong.
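A trust boundary can be made concrete as a policy check run before any autonomous action: a pre-approved spend threshold (as in the cake scenario earlier) combined with a confidence floor. A sketch with illustrative policy values:

```typescript
// Escalation policy: the agent acts on its own only inside the trust
// boundary; anything outside it is escalated to the user for permission.

interface ActionRequest {
  description: string;
  costInRupees: number;
  confidence: number; // how sure the agent is this action is wanted
}

interface Policy {
  maxAutoSpend: number;     // pre-approved spending threshold
  minActConfidence: number; // below this, always escalate
}

type Verdict = "act" | "ask-permission";

function authorize(req: ActionRequest, policy: Policy): Verdict {
  if (req.confidence < policy.minActConfidence) return "ask-permission";
  if (req.costInRupees > policy.maxAutoSpend) return "ask-permission";
  return "act";
}
```

Graceful degradation falls out of the same shape: when the agent is unsure, the system degrades to the classic todo app, a suggestion waiting for a human click.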

Why experienced engineers need to restart too

This isn't just a message for bootcamp graduates. I have been building production systems for over a decade. My instinct when someone says "build a todo app" is still to reach for a REST framework and a database. That instinct is the problem.

The mental model I built over years of shipping CRUD applications actively gets in the way. I think in endpoints when I should think in events. I think in tables when I should think in signals. I think in explicit user actions when I should think in inferred intent.

Unlearning is harder than learning. A junior developer with no CRUD muscle memory might actually pick up the new paradigm faster than I would, because they don't have to fight their own instincts.

I'm not saying my experience is worthless. Understanding distributed systems, failure modes, data consistency: these compound. But the frame through which I apply that experience needs a hard reset.

The real lesson of the todo app

The todo app was never really about todos. It was about teaching developers the dominant interaction pattern of their era: user-initiated, synchronous, CRUD-based.

That era is over. The dominant interaction pattern now is signal-driven, probabilistic, and agentic. The todo app of 2026 needs to teach developers that pattern.

So here's my challenge, to myself and to every developer reading this: go back to day zero. Build a todo app again. But this time, the user never types a task. The user never checks a box. The app figures it out.

If you can build that, you understand how software works now. If you can't, you're still building for 2015.

The todo app is dead. Long live the todo app.

Tags: ai engineering, crud api, mental model, agent architecture, developer fundamentals