DuDONG Grading System v2

DuDONG Logo

About DuDONG

Durian Desktop-Oriented Non-Invasive Grading System

DuDONG is a desktop application developed in Python by the AIDurian project for non-invasive assessment of durian ripeness and quality. Combining AI models with multiple sensor inputs, the software delivers predictions of fruit ripeness, quality, and maturity classification.

The application supports both audio analysis and multispectral imaging for comprehensive durian evaluation. Through multi-model analysis covering defect detection, shape assessment, and locule counting, DuDONG provides detailed insight into durian quality characteristics. All analysis results are persisted in a database for historical tracking and performance monitoring.

Version: 2.1.0

Features

Core Analysis Capabilities

  • Ripeness Classification: Durian ripeness detection using audio analysis and multispectral imaging
  • Quality Assessment: Defect detection and shape analysis for comprehensive quality grading
  • Locule Counting: Automated locule counting with visualized segmentation of locules
  • Maturity Classification: Multispectral image analysis for maturity determination
  • Shape Classification: Durian shape recognition and assessment

System Features

  • Real-time Processing: GPU acceleration with NVIDIA support (CUDA 12.8+)
  • Multi-Model Analysis: Comprehensive analysis combining multiple AI models
  • Manual Input Mode: Support for multi-source file processing
  • Modern UI: Professional PyQt5 dashboard with real-time status monitoring
  • GPU Acceleration: Optimized for NVIDIA GPUs with automatic CPU fallback
  • Data Persistence: Comprehensive database storage of all analysis results and history tracking
  • Report Generation: Generate detailed reports and export analysis results

Project Structure

dudong-v2/
├── main.py                          # Application entry point
├── requirements.txt                 # Python dependencies
├── README.md                        # This file
├── .gitignore                       # Git ignore rules
│
├── models/                          # AI Model wrappers (Python)
│   ├── base_model.py               # Abstract base class
│   ├── audio_model.py              # Ripeness classification model
│   ├── defect_model.py             # Defect detection model
│   ├── locule_model.py             # Locule counting model
│   ├── maturity_model.py           # Maturity analysis model
│   └── shape_model.py              # Shape classification model
│
├── model_files/                     # Actual ML model files
│   ├── audio/                       # TensorFlow/Keras audio model
│   ├── multispectral/maturity/      # PyTorch maturity model
│   ├── best.pt                      # YOLOv8 defect detection
│   ├── locule.pt                    # YOLOv8 locule segmentation
│   └── shape.pt                     # YOLOv8 shape classification (optional)
│
├── workers/                         # Async Processing Workers
│   ├── base_worker.py              # QRunnable base class
│   ├── audio_worker.py             # Audio processing thread
│   ├── defect_worker.py            # Defect detection thread
│   ├── locule_worker.py            # Locule counting thread
│   ├── maturity_worker.py          # Maturity analysis thread
│   └── shape_worker.py             # Shape classification thread
│
├── ui/                              # User Interface Components
│   ├── main_window.py              # Main application window
│   ├── panels/                      # Dashboard panels
│   ├── tabs/                        # Analysis tabs
│   ├── dialogs/                     # Dialog windows
│   ├── widgets/                     # Custom widgets
│   └── components/                  # Report generation components
│
├── utils/                           # Utility Modules
│   ├── config.py                   # Configuration and constants
│   ├── data_manager.py             # Data persistence
│   ├── db_schema.py                # Database schema
│   ├── camera_automation.py        # Camera control (Windows)
│   ├── system_monitor.py           # System metrics monitoring
│   └── other_utilities...          # Additional utilities
│
├── resources/                       # Styling and Resources
│   └── styles.py                   # Centralized stylesheets
│
└── assets/                          # Image Assets
    ├── logos/                       # Logo images
    └── loading-gif.gif              # Loading animation (optional)

Installation

Prerequisites

  • Python: 3.9 or higher
  • NVIDIA GPU: (Optional) For faster inference with CUDA 12.8+
  • Windows 10/11: Required for full camera automation support

Step 1: Clone or Extract Repository

Extract or clone the dudong-v2 repository to your desired location:

git clone <repository-url>
cd dudong-v2

Step 2: Create Virtual Environment

# Create virtual environment
python -m venv venv

# Activate virtual environment
# On Windows:
venv\Scripts\activate
# On Linux/Mac:
source venv/bin/activate

Step 3: Install Dependencies

pip install -r requirements.txt

Step 4: Verify Model Files

Ensure all required model files are present in the model_files/ directory:

  • model_files/audio/best_model_mel_spec_grouped.keras
  • model_files/audio/label_encoder.pkl
  • model_files/audio/preprocessing_stats.json
  • model_files/best.pt (Defect detection)
  • model_files/locule.pt (Locule counting)
  • model_files/multispectral/maturity/final_model.pt (Maturity)
  • model_files/shape.pt (Shape classification - optional)

If any files are missing, the application will still run but those models will not be available.
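
A quick way to confirm these paths before launching is a short check script. This is a minimal sketch that simply mirrors the list above; run it from the repository root.

from pathlib import Path

# Required model files (paths as listed above); shape.pt is optional.
REQUIRED = [
    "model_files/audio/best_model_mel_spec_grouped.keras",
    "model_files/audio/label_encoder.pkl",
    "model_files/audio/preprocessing_stats.json",
    "model_files/best.pt",
    "model_files/locule.pt",
    "model_files/multispectral/maturity/final_model.pt",
]

missing = [p for p in REQUIRED if not Path(p).is_file()]
for p in missing:
    print(f"Missing: {p}  (this model will not be available)")
if not missing:
    print("All required model files are present.")
if not Path("model_files/shape.pt").is_file():
    print("Optional shape.pt not found (shape classification disabled)")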

Usage

Running the Application

python main.py

The application will:

  1. Initialize the PyQt5 application
  2. Load configuration and set up paths
  3. Create the main window
  4. Load AI models in the background (with automatic fallback to CPU if needed; see the sketch after this list)
  5. Display the dashboard
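
The CPU fallback in step 4 boils down to a device check like the following sketch. The real logic lives inside the model wrappers, and the YOLO call shown here is only a hypothetical illustration.

import torch

# Prefer an NVIDIA GPU when CUDA is available, otherwise fall back to CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"
print(f"Inference device: {device}")

# Hypothetical use with a YOLOv8 weight file shipped in model_files/:
# from ultralytics import YOLO
# model = YOLO("model_files/best.pt")
# results = model.predict("durian.jpg", device=device)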

Using the Interface

Ripeness Tab

  • Click "Ripeness Classifier" button
  • Select a WAV audio file
  • View spectrogram and ripeness classification with confidence scores

Quality Tab

  • Click "Quality Classifier" button
  • Select an image file (JPG, PNG)
  • View annotated image with defect detection results

Maturity Tab

  • Click "Maturity Analysis" button
  • Select a multispectral TIFF image
  • View maturity classification results

Reports Tab

  • View analysis history and previous results
  • Generate PDF reports
  • Print results
  • Export data

Configuration

Edit utils/config.py to customize the following (an illustrative sketch appears after this list):

  • Window dimensions and UI colors
  • Model confidence thresholds
  • Threading and performance settings
  • Audio processing parameters
  • Device selection (CUDA/CPU)
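
To illustrate the kind of values involved, here is a hypothetical excerpt; the constant names below are assumptions, so check utils/config.py for the actual names and defaults.

# Hypothetical excerpt -- the real constant names live in utils/config.py.
WINDOW_WIDTH = 1280            # window dimensions
WINDOW_HEIGHT = 800
DEFECT_CONF_THRESHOLD = 0.50   # model confidence thresholds
LOCULE_CONF_THRESHOLD = 0.40
MAX_WORKER_THREADS = 4         # threading and performance settings
AUDIO_SAMPLE_RATE = 22050      # audio processing parameters
DEVICE = "cuda"                # device selection ("cuda" or "cpu")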

Environment Variables (Optional)

Create a .env file in the root directory:

DEVICE_ID=MAIN-001
LOG_LEVEL=INFO
CUDA_VISIBLE_DEVICES=0
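
Assuming the python-dotenv package is available (it may or may not be listed in requirements.txt), these values can be read at startup roughly as follows; CUDA_VISIBLE_DEVICES is picked up by the CUDA runtime itself.

import os
from dotenv import load_dotenv  # assumes python-dotenv is installed

load_dotenv()  # reads .env from the working directory

device_id = os.getenv("DEVICE_ID", "MAIN-001")
log_level = os.getenv("LOG_LEVEL", "INFO")
print(f"Device: {device_id}, log level: {log_level}")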

GPU Support

The application automatically detects and uses NVIDIA GPUs when available.

To check GPU availability:

python -c "import torch; print(f'CUDA Available: {torch.cuda.is_available()}'); print(f'GPU: {torch.cuda.get_device_name(0) if torch.cuda.is_available() else \"None\"}')"

Troubleshooting

Models not loading

  • Verify model files exist in model_files/ directory
  • Check that file paths match exactly (case-sensitive on Linux/Mac)
  • Review application console output for specific error messages

GPU not detected

  • Install NVIDIA drivers (latest version)
  • Verify CUDA 12.8+ is installed
  • Run the GPU check command above
  • Restart application after installing GPU drivers

UI rendering issues

  • Ensure PyQt5 is properly installed: pip install --upgrade PyQt5
  • Check display scaling settings on Windows (may need to disable DPI scaling)
  • Try running with QT_QPA_PLATFORM=windows on Windows

Camera automation not working (Windows)

  • Ensure supported camera software is installed (SecondLook, EOS Utility, AnalyzIR)
  • Verify that pywinauto (Windows GUI automation) is properly installed (see the check below)
  • Run application with administrator privileges if needed
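
As a quick, Windows-only sanity check that pywinauto is installed and can see the camera software's window, a sketch like this lists whatever top-level windows are visible (no specific window titles are assumed):

from pywinauto import Desktop  # Windows-only sanity check

# Print the titles of all top-level windows the UI Automation backend can see;
# the camera software's window should appear here if it is running.
for w in Desktop(backend="uia").windows():
    title = w.window_text()
    if title:
        print(title)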

Development

Code Style

  • Follow PEP 8 guidelines
  • Use type hints for all functions
  • Include docstrings (Google style)
  • Keep modules focused and under 500 lines

Adding New Models

  1. Create a model wrapper in the models/ directory inheriting from BaseModel (see the sketch after this list)
  2. Create corresponding worker in workers/ directory inheriting from BaseWorker
  3. Add UI panel in ui/panels/ for displaying results
  4. Integrate worker connection in ui/main_window.py
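
Steps 1 and 2 above might look like the sketch below. The method, attribute, and signal names are assumptions; mirror whatever the existing wrappers in models/ and workers/ actually override.

# models/texture_model.py -- hypothetical new model wrapper
from models.base_model import BaseModel

class TextureModel(BaseModel):
    """Hypothetical wrapper for a new texture-classification model."""

    def load(self):
        # Load weights from model_files/ here.
        ...

    def predict(self, image_path):
        # Run inference and return a result dict the UI panel can display.
        return {"label": "smooth", "confidence": 0.0}

# workers/texture_worker.py -- hypothetical worker running off the UI thread
from workers.base_worker import BaseWorker

class TextureWorker(BaseWorker):
    """Runs TextureModel.predict() on a QThreadPool thread."""

    def run(self):
        result = self.model.predict(self.input_path)   # attribute names are
        self.signals.finished.emit(result)             # assumptions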

Key Dependencies

  • PyQt5: GUI framework
  • PyTorch: YOLO models and GPU acceleration
  • TensorFlow: Audio classification model
  • OpenCV: Image processing
  • Ultralytics: YOLOv8 implementation
  • NumPy/SciPy: Numerical computing
  • Matplotlib: Visualizations
  • ReportLab: PDF generation
  • psutil: System monitoring

Database

The application automatically creates and manages an SQLite database at data/database.db which stores:

  • Analysis metadata and timestamps
  • Input file information
  • Model prediction results
  • Visualization outputs

This database is created automatically on first run.
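
The database can also be inspected outside the application with Python's built-in sqlite3 module. Table names are defined in utils/db_schema.py, so the sketch below simply lists whatever tables exist rather than assuming their names.

import sqlite3

conn = sqlite3.connect("data/database.db")
# List the tables defined by utils/db_schema.py to discover the schema.
for (name,) in conn.execute("SELECT name FROM sqlite_master WHERE type='table'"):
    print(name)
conn.close()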

Development & Support

Development Team

Developed by researchers of the AIDurian Project at the Department of Mathematics, Physics, and Computer Science, University of the Philippines Mindanao, under the Department of Science and Technology's (DOST) i-CRADLE program.

The project aims to bridge the gap between manual durian farming practices and the technological advancements available today.

Supported By

  • University of the Philippines Mindanao
  • Department of Science and Technology (DOST)
  • DOST-PCAARRD i-CRADLE Program

Industry Partners

Special thanks to AIDurian's partners:

  • Belviz Farms
  • D'Farmers Market
  • EngSeng Food Products
  • Rosario's Delicacies
  • VJT Enterprises

Support

For issues, questions, or contributions, please contact the AIDurian development team.

Version

  • Application Version: 2.1.0
  • Repository: dudong-v2
  • Last Updated: February 2026

License

© 2023-2026 AIDurian Project. All rights reserved.