
Projects

Knowledge Distillation for Physics-Informed Neural Networks

[Figure: kd_algo.png]

Our research focuses on compressing Physics-Informed Neural Networks (PINNs) using Knowledge Distillation and pruning. Knowledge Distillation improves the compressed student PINN by supplying additional in-domain data points labeled by the teacher, while L1 pruning achieves substantial sparsity with no loss in performance. A layer-wise sensitivity analysis further shows how resilient and adaptable the individual layers of the model are.
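As a rough illustration of the two techniques (not the exact training code from the report), the sketch below distills a frozen teacher PINN into a smaller student using the teacher's predictions at in-domain points, then applies L1 unstructured pruning to the student. The network sizes, the loss weight, and the physics_residual stand-in are placeholder assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Placeholder MLP; the actual PINN architecture in the report may differ.
def mlp(width, depth):
    layers = [nn.Linear(1, width), nn.Tanh()]
    for _ in range(depth - 1):
        layers += [nn.Linear(width, width), nn.Tanh()]
    layers += [nn.Linear(width, 1)]
    return nn.Sequential(*layers)

teacher = mlp(width=64, depth=4)   # assumed to be an already-trained PINN
student = mlp(width=16, depth=2)   # smaller network to be distilled

def physics_residual(model, x):
    """Hypothetical PDE residual; replace with the actual governing equation."""
    x = x.requires_grad_(True)
    u = model(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    return du  # stand-in residual term

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
x_domain = torch.rand(256, 1)  # in-domain collocation points

for step in range(1000):
    optimizer.zero_grad()
    # Distillation term: match the frozen teacher's predictions in-domain.
    with torch.no_grad():
        target = teacher(x_domain)
    distill_loss = nn.functional.mse_loss(student(x_domain), target)
    # Physics term: keep the student consistent with the governing equation.
    pde_loss = physics_residual(student, x_domain).pow(2).mean()
    loss = distill_loss + 0.1 * pde_loss  # 0.1 is an arbitrary weight
    loss.backward()
    optimizer.step()

# L1 unstructured pruning: zero out the smallest-magnitude weights per layer.
for module in student.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.5)
        prune.remove(module, "weight")  # make the sparsity permanent
```

In practice the pruning amount would be swept per layer, which is where the sensitivity analysis described above comes in.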


For detailed information, please refer to the PDF version of our report.

[Figures: kd_results.png, Prune.png, sense.png]

VQA-LLM: Chain Clarifying

[Figures: Clarifying_Method.png, Clarifying_example_2.png, Clarifying_example.png]

This project combines Vision-Language Models (VLMs) and Large Language Models (LLMs) to improve object relation inference in Visual Question-Answering (VQA). Our experiments on the GQA dataset show that while the LLM improves the understanding of the textual information, overall performance is limited by the VQA module and other constraints. Further research is needed to address these challenges.
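The exact prompting scheme is described in the report; the sketch below only illustrates the general idea of chaining the two models, with ask_llm and ask_vlm as hypothetical stand-ins for the actual model calls.

```python
from typing import Callable

def answer_with_clarification(
    question: str,
    image_id: str,
    ask_llm: Callable[[str], str],
    ask_vlm: Callable[[str, str], str],
) -> str:
    """Chain an LLM and a VLM: the LLM rewrites the question into simpler
    sub-questions about object relations, the VLM answers them on the image,
    and the LLM combines the sub-answers into a final answer."""
    # Step 1: let the LLM decompose/clarify the question.
    sub_questions = ask_llm(
        f"Break the question '{question}' into simple questions about the "
        f"objects and their relations, one per line."
    ).splitlines()

    # Step 2: query the VLM on the image for each clarified sub-question.
    facts = [f"Q: {q} A: {ask_vlm(image_id, q)}" for q in sub_questions if q.strip()]

    # Step 3: let the LLM reason over the collected facts.
    return ask_llm(
        "Given these facts about the image:\n"
        + "\n".join(facts)
        + f"\nAnswer the original question: {question}"
    )

# Toy usage with stub models (a real run would call an actual LLM and VLM).
if __name__ == "__main__":
    stub_llm = lambda prompt: ("Is there a cat?\nIs the cat on the sofa?"
                               if "Break" in prompt else "yes")
    stub_vlm = lambda image_id, q: "yes"
    print(answer_with_clarification("Is the cat sitting on the sofa?",
                                    "gqa_0001", stub_llm, stub_vlm))
```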

For detailed information, please refer to the PDF version of our report.

VQA-LLM: PICa with logic

PICa:

[Figure: PICa.png]

PICa with logic:

[Figure: PICa_with_logic.png]

PICa with full logic:

[Figure: PICa_full_logic.png]

Results:

[Figure: results.png]

This project explores the integration of logic into in-context few-shot learning using the PICa method.
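PICa prompts a large language model with an image caption and a few question-answer exemplars. The sketch below shows one way such a prompt could be assembled, with an optional logic statement per exemplar; the field names and wording are illustrative assumptions, not the exact prompt format used in the project.

```python
def build_pica_prompt(
    exemplars: list[dict],
    caption: str,
    question: str,
    with_logic: bool = False,
) -> str:
    """Assemble a PICa-style few-shot prompt from caption/question/answer
    exemplars, optionally inserting a logic statement before each answer."""
    header = "Please answer the question according to the context.\n\n"
    shots = []
    for ex in exemplars:
        shot = f"Context: {ex['caption']}\nQuestion: {ex['question']}\n"
        if with_logic and ex.get("logic"):
            shot += f"Logic: {ex['logic']}\n"
        shot += f"Answer: {ex['answer']}\n"
        shots.append(shot)
    query = f"Context: {caption}\nQuestion: {question}\nAnswer:"
    return header + "\n".join(shots) + "\n" + query

# Toy usage
exemplars = [{
    "caption": "A man rides a horse on a beach.",
    "question": "What animal is the man riding?",
    "logic": "rides(man, horse) -> riding_animal = horse",
    "answer": "horse",
}]
print(build_pica_prompt(exemplars, "A dog chases a ball in a park.",
                        "What is the dog chasing?", with_logic=True))
```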

Machine Learning with CNN Embeddings and Transfer Learning

[Figure: CNN_Embeddings_Results.png]

Our project aimed to build accurate image classification models for the CIFAR-10 dataset. We compared traditional machine learning models trained with and without CNN feature embeddings and found that using the embeddings improved both accuracy and F1 scores. Transfer learning yielded the best results overall.
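As a rough sketch of the embedding approach (not the exact code from the report), the snippet below extracts features from a pretrained ResNet-18 and fits a logistic regression on them; the choice of backbone, classifier, and preprocessing are placeholder assumptions.

```python
import numpy as np
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, f1_score

# Pretrained ResNet-18 with its classification head removed -> 512-d embeddings.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = nn.Identity()
backbone.eval()

preprocess = transforms.Compose([
    transforms.Resize(224),  # CIFAR-10 images are 32x32; upscale for the backbone
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def embed(split_train: bool) -> tuple[np.ndarray, np.ndarray]:
    """Run the frozen backbone over CIFAR-10 and collect embeddings + labels."""
    ds = datasets.CIFAR10("data", train=split_train, download=True,
                          transform=preprocess)
    loader = DataLoader(ds, batch_size=256, shuffle=False)
    feats, labels = [], []
    with torch.no_grad():
        for x, y in loader:
            feats.append(backbone(x).numpy())
            labels.append(y.numpy())
    return np.concatenate(feats), np.concatenate(labels)

X_train, y_train = embed(split_train=True)
X_test, y_test = embed(split_train=False)

# Traditional ML model on top of the CNN embeddings.
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
pred = clf.predict(X_test)
print("accuracy:", accuracy_score(y_test, pred))
print("macro F1:", f1_score(y_test, pred, average="macro"))
```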

For detailed information, please refer to the PDF version of our report.

[Figures: SHAP_Values.png, CNN_Embeddings_TL.png]