Meal Prep Mate
March 4, 2024 | Project

Technology
TL;DR
- I built an open-source meal planning app to solve common cooking challenges - helping with recipe management and meal scheduling
- Features include meal plan templates, recipe scaling, nutritional tracking, and ingredient management
- Tackled technical challenges like ingredient parsing from recipe text, receipt OCR, and GraphQL pagination
- Lessons learned: build one feature at a time, embrace iteration, and design before coding
- Currently working on mobile development plans and backend improvements
Why I built this
Picture this: It's been another long day at work. As you race home through traffic, you begin to think, “What the heck am I going to eat for dinner?”. Thankfully, you remember that ambitious recipe you bookmarked over the weekend. You know, the one where you convinced yourself you'd have the energy to julienne three different vegetables and marinate meat for an hour (which, by the way, is still a frozen brick in your freezer). Suddenly all those fast food places along your route look very tempting.
However, you stay strong and drag your sorry self home for some desperate fridge-rummaging, hoping dinner will magically materialize. Eventually though, reality comes crashing down, and you finally give in to the classic standby of pizza or that trusty frozen lasagna from Costco. May your aspirations of “eating healthier” or “cooking at home” rest in peace.
Ring any bells? Yeah, I thought so. 😅
Don't get me wrong – I love cooking. But sometimes the whole meal planning process can feel like a logistical headache. You know the weekly ritual: rounding up some healthy recipes that don't make you think "ugh, this again?", braving the grocery store battlefield, and somehow managing to cook everything before the vegetables turn into a science experiment in the back of your fridge. Add in kids or a busy life, and it sometimes feels like you need a personal chef just to feed yourself properly!
Turns out you can, in fact, install a personal robotic chef in your kitchen to cook your food (check it out on Shark Tank). While this is certainly an effective—and expensive—solution, many other solutions already exist for eating better, including meal prep subscription services, calorie counters, step-by-step meal plans, and cookbooks. Although many of these existing solutions would undoubtedly solve my problems, they never really appealed to me.
I don’t want to pay for another subscription service, especially for food I’m not sure I’ll like. I don’t want to be restricted to a rigid dieting plan. And I certainly don’t want to get bogged down scanning and weighing ingredients for counting calories. I just want someone to look at the recipes I already have, and then tell me what to make, when to make it, and how many portions to divvy out for a healthy serving size.
Basically, I want a meal planning assistant. Something that I can use to easily plan out meal plans that fit my nutritional requirements. But I also want the plans to be efficient. For example, this planning assistant should minimize food waste by planning recipes together that share ingredients or warn me about fresh ingredients that will soon go bad. It should also help with scheduling meals and grocery shopping. Like, what if the store is out of stock on ingredients—is there another recipe that could be swapped in? Or, what if I swap or reschedule a recipe for another day than originally planned—will the ingredients stay fresh long enough?
These challenges got me thinking about what a useful meal planning tool would actually look like, something that could take the mental load off planning while still keeping cooking flexible and enjoyable. That's when I started working on a solution that would tackle these everyday problems.
Purpose, Vision, and End goal
The primary purpose of this app is simple: to reduce the headaches that come with planning and cooking meals at home. It aims to make home cooking easier and healthier by guiding you through every step, from menu planning to sending timely reminders about defrosting ingredients.
While the full vision and feature set for this project are quite expansive (which you can explore in detail on the project's wiki page), these are the core features:
- A smart recipe recommendation engine that generates personalized meal plans based on your goals and dietary constraints using your own recipes
- An intuitive meal planning calendar where you can schedule meals using pre-made templates or add individual recipes directly
- Advanced recipe management tools featuring ingredient grouping, rich text editing for instructions and notes, recipe sharing capabilities, and recipe scaling
- An ingredient database that accurately matches ingredients across different recipes
- Grocery receipt scanning functionality to track ingredient prices over time
- Notifications to remind you what meals to make and when to take out ingredients to defrost
- Smart shopping lists with automatic ingredient consolidation, multi-user collaboration, and the ability to swap recipes if ingredients are unavailable
The end goal is to create an open-source project that can be self-hosted. This includes a web application, iOS and Android apps, and a containerized backend server – all designed to work seamlessly together as an integrated system.
Features
This project is still under heavy development, and features are still being implemented (check out the wiki to view the full feature list). The following features have been implemented at the time of this writing:
Meal Plan Templates
- Create and customize meal plan templates
- When adding recipes to your meal plan, you can dynamically adjust serving sizes and portions while the app automatically recalculates ingredients and nutritional information.
- Get comprehensive nutritional insights through interactive charts that display your calorie balance, macronutrient distribution, and track specific nutrients against FDA daily recommended values or your own targets to ensure balanced meal planning.
Recipe Management
- The advanced search system allows you to find recipes based on multiple criteria including nutritional content and quantities of ingredients, helping you make the most of your existing ingredients.
- Intelligent ingredient parsing: when a recipe is created, ingredient lines are automatically parsed to identify quantities and base ingredients, which are then matched with items in the ingredient database for consistent tracking.
- Create formatted recipes with rich text editing capabilities that support bold text, italics, bullet points, and headings.
- Upload and manage photos of your recipes.
- Recipe Scaling. The ingredients and nutrition information automatically adjust as you change the scale and number of servings in a recipe.
- Organize recipe ingredients into logical groups such as marinades, dressings, or sauces, making complex recipes easier to understand and prepare.
- Each ingredient group can have its own nutrition label, which is particularly useful for components like marinades where only a portion of the prepared mixture is actually used in the final dish.
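As a rough illustration of the scaling behavior described above (the types and function names here are my own sketch, not the app's actual code), scaling comes down to multiplying quantities by a servings ratio:

```typescript
// Sketch of recipe scaling, assuming a simple ingredient-line shape.
interface IngredientLine {
  name: string;
  quantity: number;
  unit: string;
}

function scaleIngredients(
  lines: IngredientLine[],
  baseServings: number,
  targetServings: number
): IngredientLine[] {
  const factor = targetServings / baseServings;
  // Nutrition totals can be scaled by the same factor.
  return lines.map((line) => ({ ...line, quantity: line.quantity * factor }));
}

// Doubling a 2-serving recipe to 4 servings doubles each quantity.
scaleIngredients([{ name: "sea salt", quantity: 1, unit: "teaspoon" }], 2, 4);
// → [{ name: "sea salt", quantity: 2, unit: "teaspoon" }]
```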
Ingredients
- Create and customize detailed expiration rules for ingredients, specifying different shelf-life durations based on storage method (refrigerator, freezer, or pantry).
- Track ingredient prices over time and across different retailers using the receipt scanning feature, which uses OCR to automatically detect and record prices from your grocery receipts.
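To give a concrete (hypothetical) picture of what an expiration rule could look like as data — these names are mine, not the app's schema:

```typescript
// Sketch of per-storage-method expiration rules.
type StorageMethod = "refrigerator" | "freezer" | "pantry";

interface ExpirationRule {
  ingredientId: string;
  // Shelf life in days for each storage method the ingredient supports.
  shelfLifeDays: Partial<Record<StorageMethod, number>>;
}

const MS_PER_DAY = 86_400_000;

function expiresOn(
  purchasedAt: Date,
  rule: ExpirationRule,
  method: StorageMethod
): Date | undefined {
  const days = rule.shelfLifeDays[method];
  if (days === undefined) return undefined;
  return new Date(purchasedAt.getTime() + days * MS_PER_DAY);
}
```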
Nutrients
- Customized RDI Targets. The application sets personalized Reference Daily Intake (RDI) values based on your age and gender, using FDA recommendations to help you meet your nutritional needs.
- Custom Nutrient Targets: Control your nutrition by setting personalized nutrient goals in three distinct ways: you can set a specific target value to aim for (like 2000 calories), establish a minimum goal to meet or exceed (such as getting at least 30g of fiber), or define an upper limit not to be exceeded (for example, staying under 2300mg of sodium). This flexibility allows you to create a nutrition plan that perfectly aligns with your dietary needs and preferences.
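The three target modes map naturally onto a small discriminated union — a sketch using my own illustrative names, not the app's actual types:

```typescript
// The three ways a nutrient goal can be expressed.
type NutrientTarget =
  | { kind: "exact"; value: number } // aim for a specific value (e.g. 2000 kcal)
  | { kind: "min"; value: number }   // meet or exceed (e.g. at least 30g fiber)
  | { kind: "max"; value: number };  // stay under (e.g. under 2300mg sodium)

function meetsTarget(intake: number, target: NutrientTarget, tolerance = 0): boolean {
  switch (target.kind) {
    case "exact":
      return Math.abs(intake - target.value) <= tolerance;
    case "min":
      return intake >= target.value;
    case "max":
      return intake <= target.value;
  }
}

meetsTarget(28, { kind: "min", value: 30 }); // → false: under the fiber minimum
meetsTarget(2100, { kind: "max", value: 2300 }); // → true: under the sodium cap
```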
The Process
Design
The design process began with sketching out ideas on a whiteboard of how the app would look. Sketching helped me clarify what functionality I wanted and what pieces of data I would need to gather and store on the backend.

With a better idea of what I wanted the app to look like, I began modeling the data for the backend. I created a basic ERD (entity-relationship diagram) of how I would organize the database tables.

I then considered the server infrastructure. I mapped out a rough idea of what containers I would need to support the functionality I wanted: for example, an object storage container to store photos, a Python container for natural language processing, and a message queue for processing receipt jobs. Below is a rough sketch of the containers and how they would communicate with each other.

With the basic design and requirements cleared up, I then went looking for technologies to use that would fit my project requirements.
Technology Choices
Looking back at my technology choices for this project, I realized many of my choices were driven by what I thought would be quick, convenient, and easy to use. While this approach helped me make rapid progress early on, I discovered I had sometimes traded away flexibility for convenience, leaving me less equipped to handle more complex functionality.
Next.js & React
I chose Next.js and React primarily because of their large community and ecosystem. Nearly every modern web development tool or package offers a React integration, making it easy to find solutions and extensions. If I were to do this project again from scratch, I'd make the same choice.
UI Framework: Shadcn/ui with Tailwind
I went with Shadcn/ui for two main reasons: the UI looked modern, clean, and simplistic, and I'm not a UI designer – it would have taken significant effort to create something equally polished from scratch. Using Shadcn/ui naturally led me to Tailwind as well.
Overall, I've been pleased with these libraries, with just a few minor complaints. With Shadcn/ui, I particularly appreciate that the components live in my own source code, letting me modify them either through passing in different props or by editing the components directly. While I like the look of Shadcn/ui, after staring at it for hours, I sometimes wonder if it fits the aesthetic I want for my app. The components can feel a bit dashboard-y or cookie-cutter – though I think that's true whenever you use an off-the-shelf UI library.
One thing I didn't fully consider when I started this project was the need for custom components. I will need a more robust calendar than what the library provides, and I suspect it could be tricky to create one that matches the design of other components.
As for Tailwind, I found it easy to get started, and I didn't have to worry about structuring CSS or class names. I appreciate how it provides a limited set of styles for typography and spacing, which helps maintain a consistent look throughout the app. While you can achieve this with regular CSS using variables/design tokens, Tailwind does make it pretty straightforward.
I did encounter some annoyances with Tailwind, like creating styles to apply across the whole app. For instance, when I wanted all my h2 elements to share the same styling, I faced a dilemma: I could make a component for each heading element, but that seemed like overkill, or I could create my own CSS class to apply to all h2s. However, writing my own custom class felt like risking the consistency that Tailwind's utility classes provide, such as using the same breakpoints, typography, and spacing.
Readability can also get messy once you start adding selectors, pseudo-classes, and breakpoints. I think using SCSS with mixins is a lot easier to read.
GraphQL Client: URQL
I opted for URQL because it supports file uploads, unlike the Apollo ecosystem. It's also significantly lighter weight than Apollo while maintaining similar syntax. I chose GraphQL as the server-client communication standard because I like the flexibility of querying backend endpoints for just the fields I need, and because nested data can be fetched in a single request, so the client doesn't need to make multiple round trips to get the data it wants.
Infrastructure: Docker
I considered using serverless functions/lambdas but ultimately decided against them because I wanted something I could host anywhere. While there are frameworks for cloud platform-agnostic solutions, I wanted the flexibility to host this anywhere, including my own home server. Plus, if I do want to host it in the cloud, I'm not locked into specific provider solutions.
The trade-off lies in scalability and cost. Cloud functions automatically scale to handle demand, while scaling with containers requires additional configuration/complexity and tools like Kubernetes. Hosting containers in the cloud, whether through Docker Compose or Kubernetes, tends to be more expensive since containers run continuously, unlike serverless functions that charge based on usage.
Server Technology: TypeScript & Node.js
I picked TypeScript thinking it would simplify things by using a single language across frontend and backend. I also saw it as a good opportunity to improve my TypeScript skills. This choice significantly influenced my other backend decisions. In hindsight, I probably wouldn't use TypeScript on the backend. While it is an improvement over regular JavaScript, TypeScript feels like just another layer in the already deep stack of tooling usually found on the frontend. I also found that the type system can be easily circumvented, and some of the type errors are so deeply nested they become a puzzle to decipher (thank goodness for AI). I understand TypeScript's role on the frontend – JavaScript isn't going anywhere in web development, so we work with what we're given. But for backend development, where we have complete control over the environment, I prefer a language with built-in type safety and likely better performance from the start.
Database Layer: Prisma
This choice followed naturally from using Node.js/TypeScript, as Prisma is one of the bigger players in the JS ecosystem of ORMs. I initially loved its simplicity, but I wouldn't choose it again – partly because I wouldn't choose TypeScript, but also because I didn't like how much it influenced my codebase structure.
For instance, all database tables/models are defined in the schema.prisma file. I would prefer that Prisma build the database from my own classes, so I'm not left stranded if I decide to move away from Prisma later. While I could create models that wrap around the Prisma models, changes to my models wouldn't be reflected in the database until I updated the schema file. Ultimately, the schema file becomes the source of truth, and I'd rather it be my own code.
That said, I'd still recommend Prisma for simpler projects.
GraphQL Server: GraphQL Yoga with Pothos
Among the available GraphQL servers and schema builders (Apollo Server, TypeGraphQL, Nexus), I chose GraphQL Yoga for its file upload support and Pothos for its Prisma integration. They both work well, but the Pothos Prisma integration did create a strong coupling between my GraphQL/presentation layer and Prisma (the data layer), which I came to regret and will discuss later in the challenges.
Challenges
Ingredient line parsing and ingredient matching
One of the core challenges I faced was ingredient line parsing and matching. For the app to function properly, it needs to understand the exact quantities of ingredients in each recipe, which means parsing ingredient lines into structured data.
Take a simple example: "1 teaspoon sea salt" needs to be broken down into:
- Quantity: 1
- Unit: teaspoon
- Ingredient: sea salt
This parsing challenge isn't trivial, especially for someone like me who isn't well-versed in natural language processing. Fortunately, I discovered that the New York Times Cooking team had tackled this exact problem in 2015. They developed an ingredient tagger using a technique called Conditional Random Fields and released both their approach and training data publicly.
When I initially approached this challenge, I expected to have to adapt and extend their solution myself. However, I found that someone else had already built upon the NYT's work and created an updated ingredient parser. I was able to use this as a foundation, containerize it, and extend it with additional functionality. One enhancement I added was the ability to track tagged text positions within ingredient lines, which is helpful when scaling recipe quantities.
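To show why the token positions matter for scaling (the shapes and names here are hypothetical, not the parser's real output format), a scaler can rewrite only the quantity tokens while leaving the rest of the line untouched:

```typescript
// Hypothetical shape of a parsed ingredient line, including the
// character positions of each tagged token.
interface ParsedToken {
  text: string;
  label: "QTY" | "UNIT" | "NAME" | "COMMENT" | "OTHER";
  start: number; // character offset within the original line
  end: number;
}

function scaleLine(line: string, tokens: ParsedToken[], factor: number): string {
  let result = "";
  let cursor = 0;
  for (const t of tokens) {
    // Copy any untagged text between tokens verbatim.
    result += line.slice(cursor, t.start);
    // Rewrite only quantity tokens; keep everything else as-is.
    result += t.label === "QTY" ? String(Number(t.text) * factor) : t.text;
    cursor = t.end;
  }
  return result + line.slice(cursor);
}
```

For example, doubling "1 teaspoon sea salt" rewrites only the "1", yielding "2 teaspoon sea salt".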
Receipt overlay box
After uploading and processing a receipt image with Azure's intelligent form recognition, I needed an interactive way to highlight receipt line items during editing. Azure got me halfway there by providing the bounding box coordinates for each detected line item relative to the original image dimensions, but I needed to figure out how to draw those coordinates onto the image.
In the end, I used an SVG to draw a box around the active line item. Here's how I implemented it; I've cut out some of the fluff to focus on the SVG portion.
ImageWithBoundingBox.tsx
import Image from "next/image";

interface Coordinate {
  x: number;
  y: number;
}

interface Highlight {
  name: string;
  boxList: Coordinate[][]; // Each box: Top-left, Top-right, Bottom-right, Bottom-left
}

interface ImageWithBoundingBoxProps {
  src: string;
  imageSize: { width: number; height: number }; // Original image dimensions
  highlight?: Highlight;
}

// Convert a box's corner coordinates into an SVG polygon points string
function getPolygonPoints(box: Coordinate[]) {
  return box.map((coord) => `${coord.x},${coord.y}`).join(" ");
}

export function ImageWithBoundingBox({
  src,
  imageSize,
  highlight,
}: ImageWithBoundingBoxProps) {
  return (
    <div className="relative w-full">
      <Image
        src={src}
        alt="Receipt"
        width={imageSize.width}
        height={imageSize.height}
        unoptimized
      />
      {highlight && (
        <svg
          className="absolute top-0 left-0 w-full h-full"
          preserveAspectRatio="none"
          viewBox={`0 0 ${imageSize.width} ${imageSize.height}`}
          style={{
            pointerEvents: "none",
          }}
        >
          {highlight.boxList.map((box, index) => (
            <polygon
              key={`${highlight.name}-${index}`}
              points={getPolygonPoints(box)}
              className="stroke-blue-500 fill-blue-500/5"
              strokeWidth="3"
            />
          ))}
        </svg>
      )}
    </div>
  );
}
I absolutely positioned an SVG element over the receipt image by placing them both inside a relative div. I then set the SVG's viewBox to match the original image dimensions to make the coordinate systems align perfectly. This allowed me to use the bounding box coordinates provided by Azure.
Initially, I ran into an issue where Next.js's automatic image optimization was resizing the images, causing misalignment between the SVG overlay and the actual line items. Since receipt text clarity is crucial, I disabled Next.js image optimization using the unoptimized prop. Perhaps in the future I will use the optimized images if the text quality is good enough, and then apply a scale-down ratio to compute new coordinates.
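That scale-down mapping would be straightforward — a sketch, assuming the resized image preserves the original aspect ratio (the names here are my own):

```typescript
interface Coordinate {
  x: number;
  y: number;
}

// Map bounding-box coordinates from the original image onto a resized one.
function rescaleBox(
  box: Coordinate[],
  originalWidth: number,
  resizedWidth: number
): Coordinate[] {
  const ratio = resizedWidth / originalWidth;
  return box.map(({ x, y }) => ({ x: x * ratio, y: y * ratio }));
}

// A box from a 2000px-wide receipt, displayed at 1000px, is scaled by 0.5.
rescaleBox([{ x: 100, y: 240 }], 2000, 1000); // → [{ x: 50, y: 120 }]
```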
Pagination in GraphQL
When I first encountered Relay in the Pothos documentation, I initially dismissed it as something for "advanced use cases" that I didn't need. However, when I started implementing my own pagination, I quickly realized that Relay's GraphQL schema design principles were exactly what I needed. That said, Relay's terminology can be confusing and intimidating at first, so I'll explain some of the concepts that helped me understand it better.
Relay Terminology
Global IDs. Global IDs ensure that entities can be uniquely identified across your entire system, regardless of type. This is particularly important when you have IDs that might overlap across different tables or systems. Although this isn't strictly necessary in my case, since I am already using UUIDs, Pothos creates global IDs by combining the model name and record ID with a colon, then base64-encoding the result.
For example, in my app:
- Original Recipe UUID:
77a96cb3-674e-4d00-adb9-67f716405f33
- Combined global ID:
Recipe:77a96cb3-674e-4d00-adb9-67f716405f33
- Final encoded ID:
UmVjaXBlOjc3YTk2Y2IzLTY3NGUtNGQwMC1hZGI5LTY3ZjcxNjQwNWYzMw==
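The encoding round trip can be sketched in a few lines of Node (encodeGlobalId/decodeGlobalId are my own illustrative names, not Pothos's API):

```typescript
// Build a global ID by joining the type name and record ID with a colon,
// then base64-encoding the result.
function encodeGlobalId(typename: string, id: string): string {
  return Buffer.from(`${typename}:${id}`).toString("base64");
}

// Reverse the process: decode, then split on the first colon.
function decodeGlobalId(globalId: string): { typename: string; id: string } {
  const decoded = Buffer.from(globalId, "base64").toString("utf8");
  const [typename, id] = decoded.split(":");
  return { typename, id };
}

encodeGlobalId("Recipe", "77a96cb3-674e-4d00-adb9-67f716405f33");
// → "UmVjaXBlOjc3YTk2Y2IzLTY3NGUtNGQwMC1hZGI5LTY3ZjcxNjQwNWYzMw=="
```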
Node. A node is any entity that has a global ID. In GraphQL, it's implemented as an interface:
schema.graphql
# An object with a Globally Unique ID
interface Node {
# The ID of the object.
id: ID!
}
type Recipe implements Node {
id: ID!
name: String!
}
Connection. A connection contains the list of items (called edges) you are fetching and page info. The page info is metadata about the page such as cursor positions and whether more items exist.
PageInfo. PageInfo contains pagination metadata about the returned result:
- endCursor: Identifier for the last item in the current page
- hasNextPage: Whether more items exist after the end cursor
- startCursor: Identifier for the first item in the current page
- hasPreviousPage: Whether more items exist before the start cursor
Here's an example query from my app that demonstrates these concepts in action:
Recipes.gql
query searchRecipes($filters: RecipeFilter!, $after: String, $first: Int) {
recipes(filter: $filters, after: $after, first: $first) {
pageInfo {
hasNextPage
endCursor
}
edges {
cursor
node {
id
name
}
}
}
}
And this is the result of the query. Note that I've included only the first and last items in the list to save space.
Recipe Query Response
{
"data": {
"recipes": {
"pageInfo": {
"hasNextPage": true,
"endCursor": "R1BDOlM6NjU3MmYxMDEtMzMyZS00YWExLWFkMjAtNGJlYWYwYTU2NWFj",
"startCursor": "R1BDOlM6NzdhOTZjYjMtNjc0ZS00ZDAwLWFkYjktNjdmNzE2NDA1ZjMz",
"hasPreviousPage": false
},
"edges": [
{
"cursor": "R1BDOlM6NzdhOTZjYjMtNjc0ZS00ZDAwLWFkYjktNjdmNzE2NDA1ZjMz",
"node": {
"name": "3-bean Good Luck Salad With Cumin Vinaigrette",
"id": "UmVjaXBlOjc3YTk2Y2IzLTY3NGUtNGQwMC1hZGI5LTY3ZjcxNjQwNWYzMw=="
}
},
// recipes...
{
"cursor": "R1BDOlM6NjU3MmYxMDEtMzMyZS00YWExLWFkMjAtNGJlYWYwYTU2NWFj",
"node": {
"name": "Beef Enchiladas",
"id": "UmVjaXBlOjY1NzJmMTAxLTMzMmUtNGFhMS1hZDIwLTRiZWFmMGE1NjVhYw=="
}
}
]
}
}
}
Using Relay provided a standardized structure for my pagination queries, which made it significantly easier to build generic components to fetch paginated data.
Creating a generic infinite scroll component / Generic combobox search
Probably two of the most complex UI components I built were the infinite scroll and search combo boxes (a search field with a dropdown where you can select options). I wanted to make them generic because I was using the same data fetching logic in multiple places in the app.
The biggest challenge was working with TypeScript to get the typing correct. I wanted a component where I could pass in the GraphQL query document and any query-specific variables, and the generic component would then add the pagination arguments to the query.
It took a few attempts, but these are the props I ended up using. Below are the props for the infinite scroll component; the combobox props are very similar.
InfiniteScroll.tsx
import { HTMLAttributes } from "react";
import { TypedDocumentNode } from "@urql/next";

export interface QueryVariables {
  after?: string;
  first?: number;
  [prop: string]: any;
}

interface Edge<T> {
  __typename?: string;
  cursor?: string;
  node: T;
}

interface Connection<T> {
  __typename?: string;
  edges: Edge<T>[];
  pageInfo: {
    hasNextPage: boolean;
    endCursor?: string | null;
  };
}

export interface InfiniteScrollProps<
  TQuery,
  TVariables extends QueryVariables,
  TNode
> extends HTMLAttributes<HTMLDivElement> {
  query: TypedDocumentNode<TQuery, TVariables>; // URQL query document
  variables: TVariables; // Variables to pass with the query
  renderItem: (item: TNode) => [JSX.Element, string | number]; // Returns the rendered item and its key
  getConnection: (data: TQuery | undefined) => Connection<TNode> | undefined; // Extracts the connection from the data
}
Query is the GraphQL query document. It uses the TypedDocumentNode helper type to correctly type the query and its variables.
Variables extends QueryVariables, which means the variables are expected to include the after and first arguments (needed for pagination).
getConnection is a simple function that returns the connection object from the Relay response, i.e., the object containing the edges list of items.
renderItem is a function that receives each item from the connection's list and returns the component for rendering it, along with its key.
And here is the component in action.
IngredientSearchResults.tsx
import { Card } from '@/components/Card';
import { InfiniteScroll } from '@/components/infinite_scroll/InfiniteScroll';
import { getIngredientsQuery } from '@/features/ingredient/api/Ingredient';
import { GetIngredientsQuery } from '@/gql/graphql';
interface IngredientSearchResultsProps {
search?: string;
}
type IngredientSearchItem = NonNullable<
GetIngredientsQuery["ingredients"]
>["edges"][number]["node"];
export function IngredientSearchResults({
search,
}: IngredientSearchResultsProps) {
return (
<InfiniteScroll
className="grid gap-4 grid-cols-autofit"
query={getIngredientsQuery}
variables={{ search: search }}
renderItem={(item: IngredientSearchItem) => {
return [
<Card
key={item.id}
images={[]}
placeholderUrl="/placeholder_ingredient.jpg"
vertical={false}
href={`/ingredients/${item.id}`}
>
<p>{item.name}</p>
</Card>,
item.id,
];
}}
getConnection={(data) => {
if (!data?.ingredients) return undefined;
return data.ingredients;
}}
></InfiniteScroll>
);
}
Backend Architecture
The backend architecture proved to be my most significant challenge, marked by constant refactoring and cascading changes throughout the codebase. These were clear signs that the architecture needed improvement.
My initial approach was a classic three-tier architecture:
- Presentation Layer (GraphQL API/Schema)
- Service Layer (Business Logic)
- Data Layer (Database Access/Models)
The design principle was straightforward: each layer would only communicate with adjacent layers. While this approach isn't inherently flawed, my implementation suffered from tight coupling between layers, and in particular, a heavy dependence on Prisma, the ORM. This was due to a few things:
- In the GraphQL layer, Pothos (the GraphQL schema builder) has an integration with Prisma that, while initially convenient, generates portions of the database queries (field and relation selections) that then had to flow through every layer to reach the data layer.
- The Prisma client also found its way into the service layer, creating a tight coupling to Prisma. Initially, I thought the Prisma client was a good enough abstraction of the data layer. However, Prisma queries can get pretty verbose and take up a lot of space on the screen, which made the code less readable and added mental overhead inside the services.
This tight coupling between layers made testing much more difficult. Services couldn't be tested without a functioning data layer, which led to fewer unit tests and, consequently, increased potential for bugs.
What's more, such tight coupling and dependence on Prisma means that any future migration away from Prisma would require a complete backend rewrite. And although I am planning to move away from TypeScript (and therefore Prisma), I did make some improvements before abandoning my current setup.
I created standardized input objects for services to handle varying data sources. For example, a recipe could come from different sources (GraphQL input, imported files) with slightly different structures. Having a single, flexible input object with optional fields significantly simplified the services.
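A hypothetical sketch of that pattern — both sources normalize into one input shape before reaching the service (all names here are illustrative, not the app's actual code):

```typescript
// One flexible input object with optional fields.
interface RecipeInput {
  title: string;
  servings?: number; // optional: not every source provides it
  ingredientLines: string[];
}

// From a GraphQL mutation payload (shape is illustrative).
function fromGraphQLArgs(args: {
  title: string;
  servings: number;
  ingredients: string[];
}): RecipeInput {
  return {
    title: args.title,
    servings: args.servings,
    ingredientLines: args.ingredients,
  };
}

// From an imported file where servings may be missing.
function fromImportedRow(row: { name: string; lines: string[] }): RecipeInput {
  return { title: row.name, ingredientLines: row.lines };
}

// The service only ever sees RecipeInput, regardless of source.
function createRecipe(input: RecipeInput) {
  return { ...input, servings: input.servings ?? 1 };
}
```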
I also tried to better encapsulate the layers by moving more Prisma client queries into the database layer with Prisma client extensions. I also moved some logic that I had initially put into the GraphQL resolvers into the service layer.
Thoughts and Lessons Learned
Build and Test One Piece at a Time
Working solo on this project, I often fell into the trap of trying to tackle multiple features simultaneously. What would start as "this will only take a second" would turn into a deep rabbit hole of interconnected changes. This approach led to untested code and made debugging significantly more challenging – when bugs appeared, it wasn't clear which of my many changes was the culprit.
Breaking down the project into smaller, manageable tasks not only made development more systematic but also helped alleviate the overwhelming feeling that comes with large projects. And with good test cases, especially for the more complex and important functionality, you have the assurance at the end of the day that the code works as intended.
Embrace an Iterative Process
I think it’s important to resist the temptation of seeking the perfect solution upfront. For instance, I'd write backend code while imagining its future use cases, only to later discover while writing another service or during frontend development that my assumptions were wrong. This resulted in unnecessary complexity and wasted effort.
If I were to do this project again, I would first work in vertical slices, like implementing a single entity/model and its CRUD operations across the full stack, and then add the features I know I'll need across the entire app, like filtering, pagination, error handling, and logging. I think this approach helps catch integration issues early, exposes hidden complexity in seemingly simple functionality, and prevents over-engineering.
Design First, Code Later
One of my biggest time sinks was designing the UI directly in code, constantly tweaking CSS and moving elements around to get the right look. I initially thought learning Figma would be too time-consuming, but I was wrong. The time invested in learning a design tool would have paid off significantly by allowing for easy iteration on designs and clear visual targets for implementation.
Current Status
I’m currently regrouping on this project. I’m evaluating the best way to make this app mobile (i.e., Flutter, PWA, or just a mobile website). I’m also giving the backend a revamp as I don’t want to continue putting more time into something I know I want to change in the future.