AI System Accurately Identifies Food and Its Nutritional Value

A new AI-powered system developed by researchers at NYU Tandon School of Engineering is making accurate food tracking more accessible, offering instant nutritional analysis from a simple photo. This technology aims to assist individuals managing their weight, diabetes, and other diet-related health conditions.

Healthy food concept. Image Credit: Tatjana Baibakova/Shutterstock.com

Presented at the 6th IEEE International Conference on Mobile Computing and Sustainable Informatics, the system employs deep-learning algorithms to analyze food images and determine nutritional content, including calories, protein, carbohydrates, and fat.

The project was driven in part by NYU’s Fire Research Group, which has long studied firefighter health and operational challenges. Research led by Prabodh Panindre and Sunil Kumar found that 73-88% of career firefighters and 76-87% of volunteer firefighters are overweight or obese, increasing their risk of cardiovascular issues. These findings underscored the need for an efficient, accurate dietary tracking system.

Traditional methods of tracking food intake rely heavily on self-reporting, which is notoriously unreliable. Our system removes human error from the equation.

Prabodh Panindre, Study Principal Author and Associate Research Professor, Department of Mechanical Engineering, NYU Tandon School of Engineering

Developing reliable food-recognition AI has been a longstanding challenge due to three key obstacles: food diversity, portion size estimation, and computational efficiency. The NYU Tandon team has addressed each of these issues.

The sheer visual diversity of food is staggering. Unlike manufactured objects with standardized appearances, the same dish can look dramatically different based on who prepared it. A burger from one restaurant bears little resemblance to one from another place, and homemade versions add another layer of complexity.

Sunil Kumar, Study Co-Author and Global Network Professor, Mechanical Engineering, NYU Tandon School of Engineering

Portion size estimation has also posed a challenge, as accurate nutritional assessments depend on precise measurements. The NYU team developed a volumetric computation function that uses advanced image processing to determine the exact area occupied by each food item on a plate.

By correlating food area with density and macronutrient data, the system converts 2D images into nutritional profiles. This eliminates the need for manual portion entry, a major limitation of earlier automated dietary-tracking tools.
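The paper's exact volumetric function is not reproduced in the article, but the general idea can be sketched in a few lines of Python. In this illustration, the lookup table `FOOD_DB`, the grams-per-cm² factors, and the per-100 g reference values are hypothetical placeholders, not the team's calibrated data:

```python
# A minimal sketch of area-based nutrition estimation, assuming a
# hypothetical lookup table; the team's actual volumetric function,
# calibration constants, and food database are not public.

# Hypothetical reference data: mass contributed per cm^2 of visible
# plate area, plus calories and macronutrients per 100 g of the food.
FOOD_DB = {
    # food: (grams_per_cm2, kcal, protein_g, carbs_g, fat_g) per 100 g
    "pizza": (0.9, 266.0, 11.0, 33.0, 10.0),
    "idli_sambhar": (0.7, 100.0, 3.2, 20.8, 0.5),
}

def estimate_nutrition(food: str, area_cm2: float) -> dict:
    """Convert a detected food's visible plate area into a rough profile."""
    g_per_cm2, kcal, protein, carbs, fat = FOOD_DB[food]
    mass_g = area_cm2 * g_per_cm2   # 2D area -> estimated mass
    scale = mass_g / 100.0          # reference values are per 100 g
    return {
        "mass_g": round(mass_g, 1),
        "kcal": round(kcal * scale),
        "protein_g": round(protein * scale, 1),
        "carbs_g": round(carbs * scale, 1),
        "fat_g": round(fat * scale, 1),
    }

# In the real system, the class label and segmented area come from the
# vision model; here they are supplied directly for clarity.
print(estimate_nutrition("pizza", 130.0))
```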

Computational efficiency was another hurdle. Many earlier models required significant processing power, often relying on cloud-based servers that introduced delays and privacy concerns. To address this, the researchers leveraged YOLOv8, a high-performance image-recognition technology, along with ONNX Runtime, an optimization tool for AI performance. The result is a web-based food identification program that runs directly in a smartphone browser, eliminating the need for a dedicated app.
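The team's model weights and training code are not published with the article, but the generic YOLOv8-to-ONNX workflow it describes can be sketched with the off-the-shelf Ultralytics and ONNX Runtime Python packages. The checkpoint name `food_yolov8.pt` below is a hypothetical stand-in for the team's fine-tuned 214-category food model:

```python
# Sketch of the general YOLOv8 -> ONNX workflow described in the article.
from ultralytics import YOLO  # pip install ultralytics

# Hypothetical fine-tuned checkpoint standing in for the team's model.
model = YOLO("food_yolov8.pt")

# Export to ONNX so the model can run under ONNX Runtime -- including
# onnxruntime-web, which executes models directly in a browser.
model.export(format="onnx", imgsz=640)

# Quick local sanity check with the ONNX Runtime Python bindings.
import onnxruntime as ort
import numpy as np

session = ort.InferenceSession("food_yolov8.onnx")
dummy = np.zeros((1, 3, 640, 640), dtype=np.float32)  # NCHW input tensor
outputs = session.run(None, {session.get_inputs()[0].name: dummy})
print([o.shape for o in outputs])
```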

In testing, the system demonstrated notable accuracy. A slice of pizza was analyzed at 317 calories, 10 grams of protein, 40 grams of carbohydrates, and 13 grams of fat—closely aligning with standard nutritional references. Similarly, it estimated idli sambhar, a South Indian dish, at 221 calories, 7 grams of protein, 46 grams of carbohydrates, and just 1 gram of fat.
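These figures line up with the standard Atwater conversion factors (4 kcal per gram of protein, 4 kcal per gram of carbohydrate, 9 kcal per gram of fat). The article does not say whether the system uses these factors internally, but the arithmetic checks out exactly for both dishes:

```python
# Standard Atwater factors: 4 kcal/g protein, 4 kcal/g carbohydrate,
# 9 kcal/g fat. Both reported totals are consistent with them.
def atwater_kcal(protein_g: float, carbs_g: float, fat_g: float) -> float:
    return 4 * protein_g + 4 * carbs_g + 9 * fat_g

print(atwater_kcal(10, 40, 13))  # pizza slice: 40 + 160 + 117 = 317 kcal
print(atwater_kcal(7, 46, 1))    # idli sambhar: 28 + 184 + 9 = 221 kcal
```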

“One of our goals was to ensure the system works across diverse cuisines and food presentations. We wanted it to be as accurate with a hot dog — 280 calories according to our system — as it is with baklava, a Middle Eastern pastry that our system identifies as having 310 calories and 18 grams of fat,” said Panindre.

To refine the system, the researchers curated their training dataset, narrowing it from a vast collection of images to a balanced set of 95,000 food instances across 214 categories. This optimization contributed to a mean Average Precision (mAP) score of 0.7941 at an Intersection over Union (IoU) threshold of 0.5—indicating that the AI correctly identifies food items approximately 80% of the time, even when they overlap or are partially obscured.
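For readers unfamiliar with the metric, IoU measures how much a predicted bounding box overlaps the ground-truth box; at the reported threshold, a detection counts as correct when that overlap ratio is at least 0.5. A minimal sketch:

```python
# IoU of two axis-aligned boxes given as (x1, y1, x2, y2).
def iou(box_a, box_b):
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)  # overlap area (0 if disjoint)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

pred, truth = (10, 10, 110, 110), (30, 20, 120, 120)
print(f"IoU = {iou(pred, truth):.2f}")  # 0.61 -> counts as a hit at 0.5
```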

Currently available as a mobile-friendly web application, the system is accessible to anyone with a smartphone. While still a proof-of-concept, the researchers envision broader applications in healthcare and dietary management in the near future.

In addition to Panindre and Kumar, the study’s co-authors include Praneeth Kumar Thummalapalli and Tanmay Mandal, both master’s students in NYU Tandon’s Department of Computer Science and Engineering.

Journal Reference:

Panindre, P., et al. (2025) Deep Learning Framework for Food Item Recognition and Nutrition Assessment. 6th International Conference on Mobile Computing and Sustainable Informatics (ICMCSI), IEEE Xplore. https://doi.org/10.1109/ICMCSI64620.2025.10883519
