Research & Healthcare · Metabolic Intelligence Lab

AI Agents for Precision Medicine

Development of an LLM abstraction layer to accelerate AI agent research and development in precision medicine and personalized nutrition.

October 2024

  • -70% time-to-experiment
  • 250+ supported LLMs
  • 0 LLM switch effort
  • Open-source release

The Challenge

Research teams needed to rapidly experiment with different LLMs and providers, but each change required significant code rewrites and re-optimization of prompts and contexts.

The Solution

Creation of a software abstraction layer enabling provider and LLM-agnostic AI agent development, with hybrid AWS and Neo Cloud infrastructure for flexible hosting.

Client Profile

The Metabolic Intelligence Lab is a research center at the Catholic University of the Sacred Heart, led by a team of leading researchers with numerous highly cited publications.

The laboratory conducts cutting-edge research in the field of metabolism, aiming to understand its mechanisms from cellular dynamics up to the functioning of the whole organism.

Our company established a collaboration with this center of excellence in 2022, supporting the development of predictive models for building a true metabolic avatar of patients through quantum machine learning algorithms.

Over the years, the collaboration has extended to our partner network at the Digital Innovation Hub in Rome, producing numerous joint projects that have given rise to innovative products at the frontier of precision medicine.

The Problem

One of the most important industrial research projects we are involved in is the development of an AI-powered platform to support nutritionists and their patients.

The project's ultimate goal is to streamline the initial patient anamnesis and to help patients keep their food diary up to date, a fundamental tool for collecting concrete data on actual daily nutrient intake.

The Food Diary Challenge

Manual diary compilation, even when assisted by mobile or web applications, is one of the most critical aspects of patient management.

Except for cases of actual metabolic disease, most patients with simple fitness goals or minor medical issues quickly lose interest in the prescribed dietary protocol, precisely because they lack the consistency and time to keep their food diary updated.

Recording dishes, ingredients, and quantities is a tedious activity that’s often difficult to execute in practice. How much butter is in the four shortbread cookies I had for breakfast this morning?

Inconsistency and lack of continuity in meal recording inevitably lead to poor-quality input for the predictive models behind the metabolic avatar, reducing their effectiveness.

The Solution

The solution lay in leveraging the new LLM technologies to create an AI agent that assists users in managing their food diary, making daily meal recording quick and simple.

Use Scenarios

  • Working lunch — The patient is at lunch with colleagues, with a plate of two sandwiches and a glass of orange juice in front of them. They pull out their smartphone, photograph the meal, and send it to the assistant. The AI analyzes the image, identifies the sandwich filling, estimates ingredient quantities, and compiles the food diary, simply asking for confirmation (a minimal sketch of this flow follows the scenarios).

  • At the restaurant — The patient has the menu open in front of them. They photograph the menu and ask for advice on what to order. The AI consults the food diary, evaluates nutritional goals, considers the nutritionist’s prescriptions, and suggests possible choices in priority order.
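
As an illustration of the first scenario, the sketch below shows how a vision-capable chat model could turn a meal photo into a structured diary entry. It is a minimal sketch only: the FoodDiaryEntry schema, the prompt text, and the choice of GPT-4o-mini are assumptions made for the example, not the platform's actual implementation.

```python
# Illustrative sketch only: FoodDiaryEntry, the prompt, and the model choice are
# hypothetical, not the Byo-Me platform's actual schema or configuration.
import base64

from langchain_core.messages import HumanMessage
from langchain_openai import ChatOpenAI
from pydantic import BaseModel, Field


class FoodItem(BaseModel):
    name: str = Field(description="Dish or ingredient name")
    estimated_grams: float = Field(description="Estimated quantity in grams")


class FoodDiaryEntry(BaseModel):
    items: list[FoodItem]
    needs_confirmation: bool = Field(
        description="True when quantity estimates should be confirmed by the patient"
    )


def analyze_meal_photo(image_path: str) -> FoodDiaryEntry:
    """Send a meal photo to a vision-capable model and get a structured diary entry back."""
    image_b64 = base64.b64encode(open(image_path, "rb").read()).decode()
    model = ChatOpenAI(model="gpt-4o-mini").with_structured_output(FoodDiaryEntry)
    message = HumanMessage(content=[
        {"type": "text",
         "text": "Identify the dishes in this meal and estimate ingredient quantities."},
        {"type": "image_url",
         "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
    ])
    return model.invoke([message])
```

The structured output keeps the agent's reply machine-readable, so the diary entry can be written automatically once the patient confirms it.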

Technical Challenges

Implementing this type of AI agent requires extensive experimentation and testing work. Agent performance strongly depends on design choices in terms of context engineering and the particular LLMs used.

There are also economic factors that pose additional challenges:

  • SotA model costs — State-of-the-art generalist models achieve the objectives with little effort, but they are particularly expensive to run and unsustainable for large-scale applications.

  • Model specialization — Different LLMs excel in different areas: computer vision, abstract reasoning, intent understanding, agentic capabilities. Performance varies significantly between models.

  • Rapid obsolescence — Context engineering refinement is specific to each LLM version. Models have a commercial life of 6-8 months, often forcing optimization to restart from scratch.

  • Open source opportunities — Models such as Alibaba's Qwen2.5-VL match GPT-4o-mini in food recognition, opening up interesting self-hosting options at lower cost (the configuration sketch after this list shows how such a swap can stay out of agent code).
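
One way to contain the obsolescence and cost problems is to keep model choice and prompt version in configuration rather than in agent code, so that replacing a retired or overpriced model only touches data. The mapping below is purely illustrative; the role names, keys, and values are assumptions made for the example.

```python
# Hypothetical, illustrative configuration: which LLM serves which agent role,
# and which prompt variant was tuned for it, live in data rather than in code.
AGENT_MODELS = {
    "meal_vision": {
        "provider": "openai",        # could become "ollama" for self-hosted Qwen2.5-VL
        "model": "gpt-4o-mini",
        "prompt_version": "v3-gpt-4o-mini",
    },
    "diary_reasoning": {
        "provider": "ollama",        # open-source model served in-house
        "model": "qwen2.5-vl",
        "prompt_version": "v1-qwen2.5-vl",
    },
}
```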

Our Response

Fastal's answer was to create a software abstraction layer that lets development teams build AI agents agnostic of the specific provider and LLM in use.

Developing this component led to the creation of our Fastal LangGraph Toolkit, subsequently released as open source.
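
The toolkit's actual API is documented in its repository; purely as a sketch of the factory idea, and assuming a configuration mapping like the hypothetical AGENT_MODELS shown earlier, agent code can request a chat model from a small factory and never import a provider SDK directly.

```python
# Minimal factory sketch (not the Fastal LangGraph Toolkit's real API):
# resolve a (provider, model) pair to a concrete LangChain chat model.
from langchain_core.language_models import BaseChatModel
from langchain_ollama import ChatOllama
from langchain_openai import ChatOpenAI


def create_chat_model(provider: str, model: str, **kwargs) -> BaseChatModel:
    """Return a chat model for the requested provider; agent code stays provider-agnostic."""
    if provider == "openai":
        return ChatOpenAI(model=model, **kwargs)
    if provider == "ollama":
        # e.g. a self-hosted Qwen2.5-VL served by Ollama on a Neo Cloud GPU
        return ChatOllama(model=model, **kwargs)
    raise ValueError(f"Unsupported provider: {provider}")
```

Wired to a configuration like the one sketched earlier, moving a step from a hosted GPT-4o-mini to a self-hosted Qwen2.5-VL becomes a one-line configuration change rather than a code rewrite, which is the spirit of the "0 LLM switch effort" figure above.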

Solution Architecture

The solution deployed for the Byo-Me project goes well beyond a simple intermediation layer:

  • Model Factory Pattern — The framework exposes, in abstract form, the backbone components on which agents are built (see the agent sketch after this list)
  • Hybrid infrastructure — Support for both R&D and staging/production environments
  • Multi-cloud — A scalable, resilient architecture leveraging AWS together with new Neo Clouds offering low-cost virtual GPUs for hosting and fine-tuning open-source models
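
To show how these pieces fit together, here is a hedged sketch of a LangGraph agent that receives its model from the factory sketched above, so the graph definition never names a provider. The diary tool and the user message are hypothetical.

```python
# Illustrative only: the tool is a stub and the real Byo-Me agent is more elaborate.
# The point is that the graph receives whatever model the factory resolves.
from langchain_core.tools import tool
from langgraph.prebuilt import create_react_agent


@tool
def log_diary_entry(food: str, grams: float) -> str:
    """Record a food item and an estimated quantity in the patient's diary (stub)."""
    return f"Logged {grams} g of {food}."


# create_chat_model is the factory sketched earlier; swapping to
# create_chat_model("ollama", "qwen2.5-vl") leaves the graph untouched.
agent = create_react_agent(
    create_chat_model("openai", "gpt-4o-mini"),
    tools=[log_diary_entry],
)

result = agent.invoke(
    {"messages": [("user", "Add two ham sandwiches and a glass of orange juice.")]}
)
```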

Results

The project generated a reusable technological asset that significantly accelerates AI agent development, reducing experimentation time and infrastructure costs while maintaining the flexibility needed to quickly adapt to the evolving LLM landscape.

Technologies Used

Fastal LangGraph Toolkit · LangGraph · AWS · Neo Cloud GPU · Qwen2.5-VL · GPT-5 · Ollama · FastAPI
