Home > IEEE CIS HSO Events > QCI Workshop @ Malaysia (11/13-11/18/2025) > Experience-based & Operation-based Learning > Samples for GAIFit Application
> Sample Models and Data (GAIFit) without QCI&AI Hardware
Teaching Videos for Practice-based Learning
Whisper-Taiwanese Tv0.5 Taiwanese sentence examples
Stage I: QCI Data Model
QCI&AI-FML Learning Platform: https://kws.oaselab.org/qciai/
QCI Agent & GAI Image: https://kws.oaselab.org/kws-ai/
Data Collection Template without QCI&AI Hardware: ( Download )
Data Collection Sample: ( Data Collection Steps 1-4 by NUTN Team )
Four Taiwanese Language Audio Samples : ( Link )
Seven Taiwanese Language Audio Samples : ( Link )
Whisper-Taiwanese v0.5 (Tv0.5) for Taiwanese / English Co-Learning (Demo) Website : ( Link )
Descriptions (download)
Step 1: Download the sample code (.ipynb) from here
Step 2: Sign in to your Google account
Step 3: Connect to Colab
Step 4: Click the “File” menu, then select “Upload notebook” to upload the sample code (.ipynb).
Step 5: Click the “Runtime” menu, then select “Change runtime type.” Set it to Python 3 and T4 GPU.
Step 5-1: Download Whisper-Taiwanese Tv0.5 from Hugging Face
Step 5-2: Refer to this website to learn how to speak Taiwanese.
Step 5-3: Speak Taiwanese and record the audio (.wav)
Step 5-4: Run ASR (Automatic Speech Recognition) to convert your Taiwanese audio into Chinese text
Step 5-5: Download the Chinese-to-English translation model
Step 5-6: Translate the Chinese text into English
Step 6: Use the English text translated by the Whisper-Taiwanese Tv0.5 model to generate images with a large language model (LLM) (e.g., QCI Agent & GAI Image). Then perform a manual evaluation to collect HEGAIImage.
Step 7: Upload the generated images to the LLM (e.g., QCI Agent & GAI Image) to produce descriptive text (e.g., no more than 20 words). Then perform a manual evaluation to collect HEGAIText.
Step 8: Evaluate the distance and light based on the generated images.
Step 9: Organize the collected Distance, Light, HEGAIText, and HEGAIImage values into the Data Collection Template.
Step 10: Merge the values from the Data Collection Template into the Data Collection Sample to complete the data model construction.
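Steps 9 and 10 above amount to assembling each trial's Distance, Light, HEGAIText, and HEGAIImage values into one row of the Data Collection Template. A minimal sketch, assuming hypothetical column names and CSV layout (the template downloaded above is authoritative):

```python
import csv
import io

# Hypothetical column names -- check the downloaded Data Collection Template
# for the actual headers, units, and value ranges.
FIELDS = ["Distance", "Light", "HEGAIText", "HEGAIImage"]

def make_record(distance, light, hegai_text, hegai_image):
    """Bundle one trial's evaluated values into a template row."""
    return {"Distance": distance, "Light": light,
            "HEGAIText": hegai_text, "HEGAIImage": hegai_image}

def write_template(records):
    """Serialize the collected records as CSV text (the real template format may differ)."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(records)
    return buf.getvalue()

rows = [make_record(120, 350, 4, 5), make_record(80, 200, 3, 4)]
print(write_template(rows))
```

Merging these rows into the Data Collection Sample is then a matter of pasting (or programmatically appending) them under the matching columns.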
Descriptions (download)
Step 1: Use the translated English text from the Whisper-Taiwanese Tv0.5 Model to generate an image using an LLM (e.g., QCI Agent & GAI Image). Then, conduct a human evaluation to collect HEGAIImage.
Step 2: Upload the generated image to an LLM model (e.g., QCI Agent & GAI Image) to generate descriptive text (e.g., no more than 20 words). Conduct a human evaluation to collect HEGAIText.
Step 3: Evaluate Distance and Light based on the generated image.
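The human evaluations in Steps 1 and 2 can be sketched as a small scoring helper that averages several raters' scores for one generated image or text. The 1-5 rating scale here is an assumption; substitute whatever rubric the workshop defines:

```python
def human_eval_score(ratings, scale=(1, 5)):
    """Average several raters' scores for one generated image (HEGAIImage)
    or one generated description (HEGAIText). The 1-5 scale is an assumption."""
    lo, hi = scale
    if not ratings:
        raise ValueError("need at least one rating")
    for r in ratings:
        if not lo <= r <= hi:
            raise ValueError(f"rating {r} outside scale {scale}")
    return sum(ratings) / len(ratings)

# e.g., three raters judge how well the GAI image matches the spoken sentence
hegai_image = human_eval_score([4, 5, 4])
print(hegai_image)
```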
Stage II: QCI Knowledge Model
QCI&AI-FML Learning Platform: https://kws.oaselab.org/qciai/
Download Open CI Model ( Open GAIFit Model )
Stage III: QCI Inference Model
Download Inference Model ( GAIFit Inference Model )
Download Inference Data ( GAIFit Inference Data )
Stage IV: QCI Fine-Tuned Model
QCI&AI-FML Learning Platform: https://kws.oaselab.org/qciai/
Download Expert Data ( GAIFit Expert Data )
Download Training Data ( GAIFit Training Data )
Download Thonny and Python File
Download Thonny 4.1.7: ( Windows ) | ( Mac )
Download Open Learning Tool Python File ( Python File )
const.py: Set (1) the Wi-Fi name and password, and (2) the MQTT server's username, password, and topic name.
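A const.py with the settings described above might look like the following sketch; every value is a placeholder to be replaced with your own credentials and the topic name assigned in the workshop:

```python
# const.py -- placeholder values; replace with your own network and broker settings.

# (1) Wi-Fi credentials for the learning tool
WIFI_SSID = "your-wifi-name"
WIFI_PASSWORD = "your-wifi-password"

# (2) MQTT broker account and topic used by the QCI&AI-FML learning platform
MQTT_USERNAME = "your-mqtt-username"
MQTT_PASSWORD = "your-mqtt-password"
MQTT_TOPIC = "your/topic/name"
```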
QCIGAIModel_DataCollection.py: Execute this code to start collecting the GAI data, including distance and light, from the learning tool.
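QCIGAIModel_DataCollection.py reads the tool's distance and light sensors in a loop. The sketch below simulates that loop with stand-in reader functions, since the real sensor APIs depend on the hardware; only the collection pattern is illustrated:

```python
import random
import time

def read_distance_cm():
    """Stand-in for the learning tool's distance sensor (hypothetical range)."""
    return round(random.uniform(10, 200), 1)

def read_light_lux():
    """Stand-in for the learning tool's light sensor (hypothetical range)."""
    return round(random.uniform(0, 1000), 1)

def collect(n_samples, interval_s=0.0):
    """Collect n (distance, light) pairs, as the real script would from hardware."""
    samples = []
    for _ in range(n_samples):
        samples.append((read_distance_cm(), read_light_lux()))
        time.sleep(interval_s)
    return samples

print(collect(3))
```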
MQTTGAIManual(two-way).py: Execute this code to (1) receive data from the QCI&AI-FML learning platform and respond to the received data, and (2) press the left-hand button to send data manually back to the QCI&AI-FML learning platform.
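The two-way flow can be sketched with a generic MQTT client such as paho-mqtt (an assumption; the actual script may use a different client, and the JSON field names below are placeholders — check MQTTGAIManual(two-way).py for the real message schema):

```python
import json

def build_payload(distance, light):
    """Format a manual reading as the JSON message sent back to the platform.
    Field names are assumed, not taken from the real script."""
    return json.dumps({"Distance": distance, "Light": light})

def on_message(client, userdata, msg):
    """Handle data arriving from the QCI&AI-FML learning platform."""
    data = json.loads(msg.payload)
    print("received:", data)

# Two-way wiring with paho-mqtt (pip install paho-mqtt); broker credentials
# and topic come from const.py. Disabled by default so the sketch can be
# read without a reachable broker.
RUN_MQTT = False  # set True on the learning tool
if RUN_MQTT:
    import paho.mqtt.client as mqtt
    client = mqtt.Client()
    client.username_pw_set("your-mqtt-username", "your-mqtt-password")
    client.on_message = on_message
    client.connect("broker.example.org")      # placeholder broker address
    client.subscribe("your/topic/name")
    # e.g., on a button press, publish a manual reading back:
    client.publish("your/topic/name", build_payload(120, 350))
    client.loop_forever()
```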