🚗 Autonomous Vehicle Perception Module

**Course:** Supervised Learning - Spring 2026
**Institution:** Cairo University - Faculty of Computers and Artificial Intelligence
**Task:** Object detection and road scene classification for self-driving cars

This repository contains the code, experiments, and reports for a comprehensive machine learning pipeline designed for autonomous vehicle perception. The project progresses from classical machine learning baselines to deep learning models, and finally to advanced sequence modeling and explainable AI (XAI) techniques.


📑 Table of Contents

  1. Dataset Overview
  2. Project Roadmap
  3. Repository Structure
  4. Setup & Installation
  5. How to Run
  6. Team Members
  7. Acknowledgments

📊 Dataset Overview

This project utilizes the following approved datasets to fulfill the phase requirements:

  • German Traffic Sign Recognition Benchmark (GTSRB) & CIFAR-10: Used primarily in Phases 1 and 2 for image-based traffic sign classification and feature extraction.
  • BDD100K Subset: Used in Phase 3 for sequential video frame processing and evaluating in-car camera data under various driving conditions.

(Note: Raw data should be placed in the data/raw/ directory. See data/datasets_info.md for specific download links and instructions).
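
GTSRB is commonly distributed with one folder per class under the dataset root. The sketch below shows a minimal indexer for that layout, demonstrated on a throwaway temporary directory rather than real data; the `index_dataset` helper and the class-per-folder layout are assumptions, not part of this repository's `src/utils` code.

```python
# Hedged sketch: index (image path, class id) pairs from a root/<class>/<file>
# layout, as GTSRB is commonly shipped. Demonstrated on a temp dir, not data/raw/.
import os
import tempfile

def index_dataset(root):
    """Return (path, class_id) pairs for a root/<class_folder>/<file> layout."""
    samples = []
    for cls in sorted(os.listdir(root)):
        cls_dir = os.path.join(root, cls)
        if not os.path.isdir(cls_dir):
            continue
        for name in sorted(os.listdir(cls_dir)):
            samples.append((os.path.join(cls_dir, name), int(cls)))
    return samples

# Build a tiny fake dataset to demonstrate the indexer.
root = tempfile.mkdtemp()
for cls in ("00", "01"):
    os.makedirs(os.path.join(root, cls))
    open(os.path.join(root, cls, "img0.ppm"), "w").close()

samples = index_dataset(root)
print(len(samples))   # 2
```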


🗺️ Project Roadmap

Phase 1: Data & Classical ML

  • EDA: Analysis of class balance and pixel statistics for the traffic sign datasets.
  • Baselines: Feature extraction paired with KNN and Naïve Bayes classifiers.
  • Dimensionality Reduction: PCA implementation to measure accuracy trade-offs.
  • Ensembles: Implementation of Bagging and Boosting classifiers.
  • Tuning: Grid search for hyperparameter optimization.
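
The Phase 1 baseline can be sketched as PCA followed by KNN, as below. This is a minimal illustration using random arrays in place of the flattened GTSRB images; the component count and neighbor count are placeholder choices, not the tuned values from the grid search.

```python
# Minimal Phase 1 sketch: PCA dimensionality reduction + KNN baseline.
# Synthetic data stands in for flattened 32x32 RGB traffic-sign images.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = rng.random((500, 32 * 32 * 3))   # 500 fake images, flattened to 3072 dims
y = rng.integers(0, 10, size=500)    # 10 placeholder classes

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

pca = PCA(n_components=50)           # reduce 3072 -> 50 dimensions
X_train_p = pca.fit_transform(X_train)
X_test_p = pca.transform(X_test)

knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X_train_p, y_train)
acc = accuracy_score(y_test, knn.predict(X_test_p))
print(f"PCA+KNN accuracy: {acc:.3f}")
```

Swapping `n_components` lets you measure the accuracy-vs-dimensionality trade-off the roadmap describes.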

Phase 2: Deep Learning (CNN/AE/Transfer)

  • CNN Architecture: Deep Convolutional Neural Network built from scratch for traffic sign recognition.
  • Anomaly Detection: AutoEncoder trained for noisy image reconstruction.
  • Transfer Learning: Fine-tuning MobileNet/VGG architectures for in-car camera data.
  • Optimization: Comparison of optimizers and learning rate schedulers, along with CNN feature map visualizations.
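
To make the feature-map idea concrete, here is a hand-rolled single convolution in NumPy: one kernel applied to one image channel, followed by a ReLU, produces one feature map. This is only an illustration of the operation; the project's actual CNNs would be built in a deep learning framework.

```python
# Illustrative sketch: compute one convolutional feature map by hand.
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D cross-correlation of a single-channel image with a kernel."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.zeros((8, 8))
image[:, 4:] = 1.0                   # left half dark, right half bright
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

fmap = np.maximum(conv2d(image, sobel_x), 0)   # ReLU on the edge response
print(fmap.shape)                              # (6, 6)
```

The kernel fires along the vertical edge between the two halves, which is exactly what visualizing early CNN feature maps reveals on real images.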

Phase 3: Advanced Techniques (Transformers/GANs/XAI)

  • Sequence Modeling: LSTM network integrated for processing video frame sequences.
  • Attention: Implementation of Attention mechanisms to focus on relevant scene regions.
  • Generative AI: DCGAN used to generate synthetic traffic scenarios.
  • Explainability (XAI): GradCAM and SHAP applied to explain model predictions.
  • Fairness: Documentation of model fairness across different weather and lighting conditions.
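
The attention mechanism above can be sketched as scaled dot-product self-attention over a short sequence of frame embeddings. This NumPy version only shows the math; the project's models would implement it inside the LSTM/Transformer pipeline.

```python
# Toy scaled dot-product self-attention over frame embeddings.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # query-key similarity
    weights = softmax(scores, axis=-1)   # each row sums to 1
    return weights @ V, weights

rng = np.random.default_rng(0)
frames = rng.random((5, 16))             # 5 frame embeddings of dim 16
context, w = attention(frames, frames, frames)   # self-attention
print(context.shape, w.shape)            # (5, 16) (5, 5)
```

Each row of `w` shows how strongly one frame attends to the others, which is the quantity inspected when checking that the model focuses on relevant scene regions.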

📂 Repository Structure

```
autonomous-vehicle-perception/
├── data/
│   ├── raw/                    # Original datasets (GTSRB, BDD100K)
│   ├── processed/              # Cleaned/preprocessed data
│   └── datasets_info.md
├── src/
│   ├── phase1_classical_ml/    # Phase 1: Classical ML scripts
│   ├── phase2_deep_learning/   # Phase 2: Deep Learning scripts
│   ├── phase3_advanced/        # Phase 3: Advanced Techniques scripts
│   └── utils/                  # Helper functions (loaders, metrics)
├── notebooks/
│   ├── 01_eda_traffic_signs.ipynb
│   ├── 02_phase1_classical_ml.ipynb
│   ├── 03_phase2_deep_learning.ipynb
│   └── 04_phase3_advanced.ipynb
├── models/                     # Saved model weights (.h5, .pt, etc.)
├── results/                    # Outputs, plots, reports (PDFs)
├── tests/                      # Unit tests
├── requirements.txt
├── README.md
├── .gitignore
└── config.yaml
```


⚙️ Setup & Installation

1. Clone the repository:

   ```bash
   git clone https://github.com/AliRadwan1/Autonomous-Vehicle-Perception-Module.git
   cd autonomous-vehicle-perception
   ```

2. Create a virtual environment (optional but recommended):

   ```bash
   python -m venv venv
   source venv/bin/activate  # On Windows use `venv\Scripts\activate`
   ```

3. Install dependencies:

   ```bash
   pip install -r requirements.txt
   ```

🚀 How to Run

The project is designed to be executed via Jupyter Notebooks for grading and demonstration purposes.

1. Ensure your data is downloaded and placed in `data/raw/`.
2. Start the Jupyter server:

   ```bash
   jupyter notebook
   ```

3. Run the notebooks in the `notebooks/` directory in sequential order (01 through 04) to reproduce the pipeline from EDA to advanced XAI. Heavy processing logic is imported from the `src/` directory.

👥 Team Members

| Name | GitHub Profile |
| --- | --- |
| Ali Radwan | @AliRadwan1 |
| Seif Eldeen Amr | @seifah1234 |
| Nouran Essam | @Nouranessam116 |
| Zyad Atef | @Zyadateff |
| Mawada Emad | @mawadaemad |

🙏 Acknowledgments

We would like to thank our course instructors for their guidance throughout this project:

  • Dr. Ghada Dahy
  • Eng. Hamza EmadEIDin
  • Eng. Abdelrahman Sayed
  • Eng. Sherif Magdy
