# Med AI Insight Viewer
A web application leveraging AI to analyze medical images and provide insightful descriptions.
This application allows users to securely upload medical images (X-rays, MRIs, CT scans, etc.) and receive AI-generated analysis, including potential observations and structured descriptions. It utilizes OpenAI’s vision capabilities via Supabase Edge Functions for analysis, along with Supabase for authentication and data persistence.
## Key Features
- Secure User Authentication: Google OAuth login via Supabase Auth ensures user data privacy.
- Easy Image Upload: Simple drag-and-drop or file selection interface for medical images.
- AI-Powered Analysis: Utilizes OpenAI's advanced vision models (e.g., `gpt-4-vision-preview`) to interpret images.
- Structured Results: Provides analysis in a clear format (e.g., description, potential findings, comments).
- Analysis History: Stores past analyses for user reference, secured by Row Level Security (RLS).
- Responsive UI: Built with Shadcn/ui and Tailwind CSS for a clean experience on desktop and mobile.
## Tech Stack
- Frontend:
  - Framework: React (Vite)
  - Language: TypeScript
  - UI Library: Shadcn/ui
  - Styling: Tailwind CSS
  - Routing: React Router DOM (`react-router-dom`)
  - State Management: React Context, `useState`, Supabase Auth Helpers
  - Notifications: `react-hot-toast` (via the `useToast` hook), `sonner`
  - Markdown Rendering: `markdown-to-jsx`
- Backend:
  - Platform: Supabase
  - Authentication: Supabase Auth (Google OAuth configured)
  - Database: Supabase PostgreSQL
  - Serverless Functions: Supabase Edge Functions (Deno runtime)
- AI:
  - Model Provider: OpenAI
  - API Interaction: via a Supabase Edge Function
## Project Structure
```
.
├── public/                 # Static assets (icons, robots.txt)
├── src/                    # Frontend React application source
│   ├── components/         # Reusable React components
│   │   ├── ui/             # Shadcn UI components
│   │   ├── AnalysisResult.tsx  # Displays AI analysis results
│   │   ├── ApiKeyInput.tsx     # (Legacy client-side check - not used for the backend API call)
│   │   ├── Header.tsx          # Application header with navigation/logout
│   │   ├── HistoryList.tsx     # Displays list of past analyses
│   │   └── ImageUpload.tsx     # Handles image selection and preview
│   ├── hooks/              # Custom React hooks (use-toast, use-mobile)
│   ├── lib/                # Utility functions (cn)
│   ├── pages/              # Top-level route components (Index, Dashboard, NotFound)
│   ├── types/              # TypeScript type definitions
│   ├── utils/              # Utilities for external services (openai.ts - calls the backend)
│   ├── App.css             # Basic App styles (potentially removable)
│   ├── App.tsx             # Main application component, routing, Supabase context
│   ├── index.css           # Tailwind directives and base styles
│   ├── main.tsx            # Application entry point
│   └── vite-env.d.ts       # Vite TypeScript env declarations
├── supabase/               # Supabase backend configuration and code
│   ├── functions/          # Supabase Edge Functions
│   │   ├── _shared/        # Shared code for functions (cors.ts)
│   │   └── analyze-image/  # Edge Function for OpenAI image analysis
│   │       └── index.ts
│   └── migrations/         # Database schema migrations (.sql)
├── .gitignore              # Git ignore rules
├── components.json         # Shadcn UI configuration
├── eslint.config.js        # ESLint configuration
├── index.html              # Main HTML entry point for Vite
├── package.json            # Project dependencies and scripts
├── postcss.config.js       # PostCSS configuration
├── README.md               # This file
├── tailwind.config.ts      # Tailwind CSS configuration
├── tsconfig.app.json       # TypeScript config for the app
├── tsconfig.json           # Base TypeScript config
├── tsconfig.node.json      # TypeScript config for the Node env (Vite config)
└── vite.config.ts          # Vite build configuration
```
## Core Functionality & Workflow
### Authentication (`src/App.tsx`, `src/pages/Index.tsx`)

- Users land on the `Index` page.
- Clicking "Get Started" initiates the Supabase Google OAuth flow.
- Upon successful login, Supabase redirects back to the app (specifically `/dashboard`, as configured in the OAuth options).
- The `RequireAuth` component in `App.tsx` verifies the Supabase session using `useSession`. Authenticated users can access `/dashboard`; others are redirected to `/`.
- The `Header` component provides a logout button, which calls `supabase.auth.signOut()`.
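The login flow above can be sketched as follows. This is an illustrative helper, not the app's actual code: the function name is an assumption, and it assumes the supabase-js v2 `signInWithOAuth` API with a `redirectTo` option.

```typescript
// Hypothetical helper: builds the argument passed to supabase.auth.signInWithOAuth
// so Supabase sends the user back to /dashboard after Google login.
function googleOAuthParams(origin: string) {
  return {
    provider: 'google' as const,
    options: {
      // Must also be listed under "Redirect URLs" in Supabase Auth settings.
      redirectTo: `${origin}/dashboard`,
    },
  };
}

// Usage in a click handler (sketch):
//   await supabase.auth.signInWithOAuth(googleOAuthParams(window.location.origin));
```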
### Image Upload (`src/pages/Dashboard.tsx`, `src/components/ImageUpload.tsx`)

- On the `Dashboard`, the `ImageUpload` component allows users to drag & drop or select an image file.
- A preview of the selected image is displayed.
- The selected `File` object and a base64 representation (`imagePreview`) are stored in the `Dashboard` component's state.
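A component like `ImageUpload` typically validates the selected file before generating a preview. The sketch below is hypothetical (the helper name and the 10 MB cap are assumptions, not taken from the app's code):

```typescript
// Illustrative sketch: accept only non-empty image files under a size cap
// before producing a preview. Names and limits are assumptions.
function isSupportedImage(
  file: { type: string; size: number },
  maxBytes = 10 * 1024 * 1024,
): boolean {
  return file.type.startsWith('image/') && file.size > 0 && file.size <= maxBytes;
}
```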
### Analysis Process

- Trigger: The user clicks the "Analyze Image" button on the `Dashboard`.
- Frontend (`src/pages/Dashboard.tsx`, `src/utils/openai.ts`):
  - The `analyzeImage` function in `Dashboard.tsx` is called.
  - It calls the utility function `analyzeImageApi` from `src/utils/openai.ts`.
  - `analyzeImageApi` gets the current Supabase session token.
  - It makes a `POST` request to the Supabase Edge Function endpoint (`/functions/v1/analyze-image`).
  - The request includes the `Authorization: Bearer <token>` header and a JSON body containing `imageBase64` and `imageType`.
  - It handles the response from the Edge Function.
- Backend (`supabase/functions/analyze-image/index.ts`):
  - The Edge Function receives the request.
  - It validates the incoming JWT using a Supabase client initialized with the user's token.
  - It retrieves the securely stored OpenAI API key from the Edge Function's environment variables (`Deno.env.get('OPENAI_API_KEY')`). The client-side key is NOT used here.
  - It formats the `imageBase64` string into a data URL if necessary.
  - It constructs a request to the OpenAI API (`gpt-4.1-mini` or a similar vision model specified in the function), sending the image URL and a specific prompt asking for medical observations and a hypothetical diagnosis.
  - It receives the analysis text from OpenAI.
  - It structures the response into the `AnalysisResultType` format, adding a timestamp.
  - It saves the `imageType` and the structured `result` (as JSONB) to the `users_history` table in the Supabase database, linking it to the authenticated `user_id`.
  - It returns the newly created database record (containing the result) to the frontend.
- Frontend (`src/pages/Dashboard.tsx`):
  - Receives the analysis result from the utility function.
  - Updates the `analysisResult` state variable.
  - Displays a success toast notification.
  - The `AnalysisResult` component re-renders to display the new data.
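The frontend request described above can be sketched as a small builder function. This is an illustration of the request shape, not the app's actual `analyzeImageApi` implementation; the helper name is an assumption.

```typescript
// Sketch: assemble the fetch options for the analyze-image Edge Function call.
function buildAnalyzeRequest(
  token: string,
  imageBase64: string,
  imageType: string,
): { method: string; headers: Record<string, string>; body: string } {
  return {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${token}`, // Supabase session JWT
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({ imageBase64, imageType }),
  };
}

// Usage (sketch):
//   const res = await fetch(`${supabaseUrl}/functions/v1/analyze-image`,
//                           buildAnalyzeRequest(token, b64, 'X-ray'));
```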
### Result Display (`src/components/AnalysisResult.tsx`)

- Renders the `AnalysisResultType` data passed via props.
- Displays the image preview alongside the AI-generated content.
- Uses `markdown-to-jsx` to render the analysis content, allowing for formatted text from the AI.
- Includes a crucial disclaimer that the analysis is not professional medical advice.
### History (`src/pages/Dashboard.tsx`, `src/components/HistoryList.tsx`)

- The "History" tab on the `Dashboard` renders the `HistoryList` component.
- `HistoryList` uses the Supabase JS client (`useSupabaseClient`) to fetch records from the `users_history` table, ordered by creation date.
- Supabase RLS policies ensure only the currently logged-in user's history is returned.
- Displays a list of past analyses, showing the image type, a snippet of the diagnosis, and a timestamp.
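The history fetch can be sketched as below. The function names are assumptions; the query relies on RLS to scope rows to the current user, so no explicit `user_id` filter is needed. The `snippet` helper is purely hypothetical, showing one way to shorten a diagnosis for list display.

```typescript
// Sketch: fetch the user's history, newest first. RLS restricts rows to the
// authenticated user, so no explicit user_id filter appears in the query.
async function fetchHistory(supabase: any) {
  const { data, error } = await supabase
    .from('users_history')
    .select('*')
    .order('created_at', { ascending: false });
  if (error) throw error;
  return data;
}

// Hypothetical helper: truncate a long diagnosis for the list view.
function snippet(text: string, max = 80): string {
  return text.length <= max ? text : `${text.slice(0, max).trimEnd()}...`;
}
```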
## Backend Details

### Supabase Edge Function (`analyze-image`)
- Purpose: Securely interacts with the OpenAI API using a server-side secret key and stores results.
- Trigger: HTTP POST request to `/functions/v1/analyze-image`.
- Authentication: Requires a valid Supabase JWT in the `Authorization` header.
- Environment Variables: Requires `SUPABASE_URL`, `SUPABASE_ANON_KEY`, and `OPENAI_API_KEY` to be set in the Edge Function settings.
- Input: JSON `{ imageBase64: string, imageType: string }`.
- Processing:
  - Authenticates the user via JWT.
  - Retrieves the `OPENAI_API_KEY` secret.
  - Calls the OpenAI Chat Completions API with a vision model.
  - Parses the OpenAI response.
  - Inserts the result into the `users_history` table using the authenticated user's ID.
- Output: JSON containing the newly created database entry (`{ result: UserHistoryItem }`).
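Two of the processing steps above, normalizing the base64 payload into a data URL and wrapping the analysis text with a timestamp, can be sketched as pure functions. These are illustrative: the function names, the `image/jpeg` default, and the field names on the result object are assumptions about `AnalysisResultType`, not the function's actual code.

```typescript
// Sketch: ensure the payload is a data URL before sending it to the vision API.
function toDataUrl(imageBase64: string, mime = 'image/jpeg'): string {
  return imageBase64.startsWith('data:')
    ? imageBase64 // already a data URL, pass through unchanged
    : `data:${mime};base64,${imageBase64}`;
}

// Sketch: wrap the model's analysis text with a timestamp (field names assumed).
function buildResult(analysisText: string): { content: string; timestamp: string } {
  return { content: analysisText, timestamp: new Date().toISOString() };
}
```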
### Supabase Database Schema (`users_history`)
- Table: `public.users_history`
- Purpose: Stores the results of image analyses linked to users.
- Columns:
  - `id` (uuid, PK): Unique identifier for the history entry.
  - `user_id` (uuid, FK -> `auth.users`): Links the entry to the authenticated user.
  - `image_url` (text, nullable): Currently unused in the primary analysis flow, which uses base64; could be used if uploaded images are stored directly.
  - `image_type` (text, not null): Type of the analyzed image (e.g., "X-ray", "MRI").
  - `result` (jsonb, not null): Stores the structured `AnalysisResultType` object returned by the AI.
  - `created_at` (timestamptz, default now()): Timestamp of when the analysis was performed.
- Row Level Security (RLS):
  - Enabled: Yes.
  - Policies:
    - Users can `SELECT` only their own history records (`auth.uid() = user_id`).
    - Users can `INSERT` only records where `user_id` matches their own `auth.uid()`.
## Getting Started

### Prerequisites
- Node.js (v18 or later recommended)
- npm, yarn, or pnpm
- Git
- Supabase Account
- Supabase CLI (Optional, for local development)
- OpenAI API Key
### Installation & Setup
1. Clone the repository:

   ```bash
   git clone <repository-url>
   cd med-ai-insight-viewer
   ```

2. Install frontend dependencies:

   ```bash
   npm install   # or yarn install / pnpm install
   ```

3. Set up environment variables:

   - Create a `.env` file in the root directory.
   - Add your Supabase Project URL and anon key:

     ```bash
     VITE_SUPABASE_URL=YOUR_SUPABASE_PROJECT_URL
     VITE_SUPABASE_ANON_KEY=YOUR_SUPABASE_ANON_KEY
     ```

   - You can find these in your Supabase project settings (Project Settings > API).
4. Supabase setup:

   - Option A: Supabase Cloud (recommended for deployment)
     - Go to your Supabase project dashboard.
     - Authentication: Navigate to Authentication > Providers and enable the "Google" provider. Add your Google Cloud OAuth credentials. Ensure you add your app's URL(s) (including localhost for development) to the "Redirect URLs" section in Supabase Auth settings and in your Google Cloud OAuth configuration.
     - Database: Navigate to the SQL Editor. Copy the contents of `supabase/migrations/20250421000000_initial_schema.sql` and run it to create the `users_history` table and RLS policies.
     - Edge Functions:
       - Navigate to Edge Functions.
       - Deploy the `analyze-image` function (e.g., using `supabase functions deploy analyze-image --no-verify-jwt` if testing locally first, or set up CI/CD).
       - Go to the `analyze-image` function's settings > Secrets and add your `OPENAI_API_KEY`.
   - Option B: Supabase local development
     - Initialize Supabase locally: `supabase init`
     - Start Supabase services: `supabase start`
     - Apply database migrations: `supabase db push` (or link your project with `supabase link --project-ref <your-project-ref>` and pull schema changes if needed).
     - Set Edge Function secrets locally: `supabase secrets set OPENAI_API_KEY=YOUR_OPENAI_API_KEY`
     - You'll need to configure Google Auth locally, or use email/password for testing if not using the cloud setup. Use the local Supabase URL/keys in your `.env`.
5. Run the frontend:

   ```bash
   npm run dev
   ```

   The application should now be running, typically at `http://localhost:8080`.

6. Deploy the Edge Function (if not done in step 4):

   ```bash
   # Link to your project if you haven't already
   # supabase link --project-ref <your-project-ref>

   # Deploy the function
   supabase functions deploy analyze-image
   ```

   IMPORTANT: Set the secret in the Supabase Dashboard (Settings > Edge Functions > analyze-image > Secrets). Add `OPENAI_API_KEY` with your actual OpenAI key value.
## Configuration
- OpenAI Model: The AI model used for analysis is specified in `supabase/functions/analyze-image/index.ts` (currently hardcoded, likely `gpt-4.1-mini` or similar).
- Analysis Prompt: The prompt sent to OpenAI is also defined within the `analyze-image` Edge Function. Modify it to change the AI's behavior or the desired output format.
- UI Theme: Colors and styles can be adjusted in `src/index.css` (CSS variables) and `tailwind.config.ts`.
- Shadcn UI: Components can be added or customized using the Shadcn CLI and `components.json`.
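To illustrate the kind of prompt customization described above, here is a hypothetical prompt builder. The actual prompt lives in `supabase/functions/analyze-image/index.ts`; the wording and function name below are assumptions, not the project's real prompt.

```typescript
// Hypothetical example of a prompt assembled per image type. The real prompt
// is defined inside the analyze-image Edge Function.
function buildPrompt(imageType: string): string {
  return [
    `You are assisting with a ${imageType} medical image.`,
    'Describe notable observations and any hypothetical findings.',
    'Structure the answer as: description, potential findings, comments.',
    'State clearly that this is not medical advice.',
  ].join(' ');
}
```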
## Usage
1. Open the application in your browser.
2. Log in using your Google account.
3. Navigate to the "Analyze Image" tab.
4. Upload a medical image using the drag-and-drop area or the file selector.
5. Click the "Analyze Image" button.
6. Wait for the analysis to complete (a loading indicator will show).
7. View the structured results displayed below the upload section.
8. Navigate to the "History" tab to view past analyses.
## Disclaimer
This application is for informational and demonstration purposes only. The AI-generated analysis is NOT a substitute for professional medical advice, diagnosis, or treatment. Always consult with a qualified healthcare provider regarding any medical conditions or concerns. Do not disregard professional medical advice or delay in seeking it because of something you have read or seen using this application.
## Contributing
Contributions are welcome! Please feel free to submit issues or pull requests.
1. Fork the repository.
2. Create a new branch (`git checkout -b feature/your-feature-name`).
3. Make your changes.
4. Commit your changes (`git commit -m 'Add some feature'`).
5. Push to the branch (`git push origin feature/your-feature-name`).
6. Open a Pull Request.
## License
(Specify License - e.g., MIT, Apache 2.0. If none, state “All Rights Reserved.”)
## Lovable Project

- Repository: gitkenan/doctair
- Last Updated: 4/23/2025