Teaching Videos for Practice-based Learning
Stage I: QCI Data Model
QCI&AI-FML Learning Platform: https://kws.oaselab.org/qciai/
Data Collection Template without QCI&AI Hardware: ( Download )
Data Collection Sample: ( Data Collection Steps 1-4 by NUTN Team )
Descriptions: ( Download )
Step 1: Use the translated English text from the Whisper-Taiwanese Tv0.5 Model to generate an image using an LLM (e.g., Gemini or ChatGPT). Then, conduct a human evaluation to collect HEGAIImage.
Step 2: Upload the generated image to an LLM (e.g., Gemini or ChatGPT) to generate descriptive text (e.g., no more than 20 words). Then conduct a human evaluation to collect HEGAIText.
Step 3: Evaluate Distance and Light based on the generated image; a minimal Python record-keeping sketch for Steps 1-3 follows.
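The sketch below is a minimal, hypothetical example of how the values collected in Steps 1-3 (HEGAIImage, HEGAIText, Distance, Light) could be logged per sample; the CSV column layout and the 0-10 score range are assumptions, not the official data collection template.

```python
# Hypothetical record-keeping sketch for Steps 1-3.
# Field names follow the steps above; the CSV layout and 0-10 score
# range are assumptions, not the official template.
import csv

FIELDS = ["sample_id", "translated_text", "HEGAIImage", "HEGAIText", "Distance", "Light"]

def append_record(path, record):
    """Append one data-collection record to a CSV file, writing the header once."""
    try:
        with open(path, "x", newline="", encoding="utf-8") as f:
            writer = csv.DictWriter(f, fieldnames=FIELDS)
            writer.writeheader()
            writer.writerow(record)
    except FileExistsError:
        with open(path, "a", newline="", encoding="utf-8") as f:
            writer = csv.DictWriter(f, fieldnames=FIELDS)
            writer.writerow(record)

if __name__ == "__main__":
    append_record("data_collection.csv", {
        "sample_id": 1,
        "translated_text": "A student rides a bicycle to school on a sunny morning.",
        "HEGAIImage": 8,   # human evaluation of the LLM-generated image (Step 1)
        "HEGAIText": 7,    # human evaluation of the LLM-generated description (Step 2)
        "Distance": 6,     # evaluated from the generated image (Step 3)
        "Light": 9,        # evaluated from the generated image (Step 3)
    })
```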
Stage II: QCI Inference Model
QCI&AI-FML Learning Platform: https://kws.oaselab.org/qciai/
Download Open CI Model ( Open GAIFit Model )
Download Inference Model ( GAIFit Inference Model )
Download Inference Data ( GAIFit Inference Data )
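The actual inference behaviour is defined by the downloaded Open GAIFit Model and GAIFit Inference Model files. As a rough illustration of the kind of fuzzy-style inference such a CI model performs, the sketch below maps the Distance and Light values from Stage I to an output score; the membership functions, rules, and output values are hypothetical.

```python
# Illustrative fuzzy-style inference sketch. Inputs are assumed to be the
# Distance and Light values (0-10) collected in Stage I; the membership
# functions, rule base, and output scale are hypothetical and do NOT
# reproduce the downloaded GAIFit inference model.

def tri(x, a, b, c):
    """Triangular membership function with feet at a, c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def infer(distance, light):
    """Weighted-average (zero-order Sugeno-style) inference over two hypothetical rules."""
    # Rule 1: IF Distance is Near AND Light is Bright THEN output = 9
    w1 = min(tri(distance, -1, 0, 5), tri(light, 5, 10, 11))
    # Rule 2: IF Distance is Far OR Light is Dark THEN output = 3
    w2 = max(tri(distance, 5, 10, 11), tri(light, -1, 0, 5))
    if w1 + w2 == 0:
        return 5.0  # neutral output when no rule fires
    return (w1 * 9 + w2 * 3) / (w1 + w2)

if __name__ == "__main__":
    print(infer(distance=2.0, light=8.0))  # mostly "Near and Bright" -> high score
    print(infer(distance=9.0, light=2.0))  # mostly "Far or Dark"    -> low score
```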
Stage III: QCI Fine-Tuned Model
QCI&AI-FML Learning Platform: https://kws.oaselab.org/qciai/
Download Expert Data ( GAIFit Expert Data )
Download Training Data ( GAIFit Training Data )
Download Thonny 4.1.7: ( Windows ) | ( Mac )
Download Open Learning Tool Python File ( Python File )
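The sketch below, runnable in Thonny, shows the kind of error measure that fine-tuning with the Expert and Training Data aims to reduce: the gap between model predictions and expert judgments. The file name gaifit_expert_data.csv, its columns (Distance, Light, ExpertScore), and the placeholder predict() function are assumptions for illustration; the real data formats and tuning procedure come from the downloads above.

```python
# Hypothetical evaluation sketch: compare placeholder predictions with expert
# scores. File name, columns, and predict() are assumptions, not the GAIFit model.
import csv

def predict(distance, light, w_distance=0.5, w_light=0.5):
    """Placeholder model: a weighted average of the two inputs (not the GAIFit model)."""
    return w_distance * distance + w_light * light

def mean_absolute_error(path, w_distance=0.5, w_light=0.5):
    """Mean absolute error between placeholder predictions and expert scores in a CSV."""
    errors = []
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            pred = predict(float(row["Distance"]), float(row["Light"]), w_distance, w_light)
            errors.append(abs(pred - float(row["ExpertScore"])))
    return sum(errors) / len(errors)

if __name__ == "__main__":
    # Before vs. after a hypothetical fine-tuning step that re-weights the inputs.
    print("baseline MAE:", mean_absolute_error("gaifit_expert_data.csv"))
    print("tuned MAE   :", mean_absolute_error("gaifit_expert_data.csv", 0.7, 0.3))
```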