Hello everyone~ Today we will introduce how to use the ZAI-FML platform and some basic concepts of fuzzy logic. First, search for ZAI-FML. We can use a Google account to log in to the ZAI-FML platform. Upload the sample program to the "History Project" and click on "Upload XML", then click on "Edit". The fuzzy controller name in this example is "Travel Recommendation". This system can be used to determine whether to recommend a tourist attraction. There are two input variables and one output variable. This application uses the Center of Gravity (COG) defuzzification method; there are also other defuzzification methods that you can try when you have time. In addition, the accumulation method chosen is "MAX". Finally, a random MQTT topic is generated during project creation. It will be used later when there is a need to connect with other devices through MQTT.

Next, we move on to setting up the knowledge base. Before traveling, we may consider the evaluation and distance of a tourist attraction to decide whether to go. Here, the fuzzy variable "evaluation" is modeled using a trapezoid function, and we need to set the boundary values, param 1 to param 4, for this function. At this point, you may be wondering why there is overlap between the different sections of the function. The reason for the overlapping parts in the membership function is that different people may have different definitions or perceptions of what constitutes a "good" or "bad" evaluation. For example, an evaluation of 30 may be considered "bad" by some people, but others may think it's just "normal". This is the essence of fuzzy logic. Similarly, the trapezoid membership functions for the two input variables and the one output variable are constructed in the same way.

Let's move on to the next step of building the rule base. You can either download a template to build the rules or build them one by one directly on the platform. Because there are two input variables, each with three linguistic terms, we will have nine rules.
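The trapezoid membership function and COG defuzzification described above can be sketched in a few lines of Python. The parameter values for the "bad" term below are illustrative assumptions, not the exact boundary values shown in the video.

```python
# Sketch of a trapezoidal membership function and Center of Gravity (COG)
# defuzzification, as used by the "Travel Recommendation" controller.
# The param1..param4 values in the example call are assumptions.

def trapezoid(x, p1, p2, p3, p4):
    """Membership degree of x for a trapezoid with corners p1..p4."""
    if x <= p1 or x >= p4:
        return 0.0
    if p2 <= x <= p3:
        return 1.0
    if x < p2:
        return (x - p1) / (p2 - p1)  # rising edge
    return (p4 - x) / (p4 - p3)      # falling edge

def cog(xs, mus):
    """Center of Gravity of a sampled, aggregated membership function."""
    total = sum(mus)
    if total == 0:
        return 0.0
    return sum(x * m for x, m in zip(xs, mus)) / total

# An evaluation of 30 is only partially "bad" -- the overlap in action:
print(trapezoid(30, 0, 0, 25, 35))  # 0.5
```

Note how an input of 30 gets a membership degree between 0 and 1: this is exactly the overlap discussed above, where different people would call the same evaluation "bad" or "normal".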
Let's take the first rule as an example: when the evaluation is bad and the distance is near, the system will recommend the tourist attraction. Some students may be puzzled at this point: how can the system recommend a tourist attraction with a bad evaluation? That's because, for me, the incentive of being close is very strong. Since it's so close, I would go and see what kind of attraction it is, even if it has a bad evaluation. Because everyone's subjective preferences and opinions are different, the rule base built by each person will also be different. Great! That's how to use the ZAI-FML platform and some basic concepts of fuzzy logic. Do you all understand? If you have any questions, feel free to leave a message below. See you next time. Bye~
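The 3 x 3 rule base from this lesson can be sketched as a small lookup table. The first rule (bad + near, recommend) is taken from the video; the other eight consequents are illustrative assumptions, since everyone's rule base will be different.

```python
# Sketch of the rule base: 2 inputs x 3 linguistic terms each = 9 rules.
# Only the first rule's consequent comes from the video; the rest are
# example preferences you would replace with your own.
from itertools import product

EVALUATION = ["bad", "normal", "good"]
DISTANCE = ["near", "medium", "far"]

rules = {
    ("bad", "near"): "recommend",       # the first rule from the video
    ("bad", "medium"): "not_recommend",
    ("bad", "far"): "not_recommend",
    ("normal", "near"): "recommend",
    ("normal", "medium"): "normal",
    ("normal", "far"): "not_recommend",
    ("good", "near"): "strongly_recommend",
    ("good", "medium"): "recommend",
    ("good", "far"): "normal",
}

for ev, di in product(EVALUATION, DISTANCE):
    print(f"IF evaluation IS {ev} AND distance IS {di} THEN {rules[(ev, di)]}")

print(len(rules))  # 9
```

Enumerating the Cartesian product of the linguistic terms makes it easy to check that no rule is missing before uploading the rule base to the platform.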
Hello everyone, today we will introduce how to use the ZAI-FML platform for PSO-based model training. First, click on "PSO Learning". You can either download a template to build the training data or build it one by one directly on the platform. If there is not a large amount of actual data available, we can choose to use simulated data for training.

When building the knowledge base, we set the value of "evaluation" to be between 0 and 100, so we can use the "RANDBETWEEN" formula to automatically generate a number within that range. Similarly, we set the value of "distance" to be between 0 and 500 and use "RANDBETWEEN" to generate a number within that range as well. The final influence value needs to be generated according to the predefined rule base so that it takes a reasonable value. Taking the first rule as an example: if the evaluation is "bad" and the distance is "near", then the recommendation is "recommend". Converting the linguistic terms to numerical values, "bad" for evaluation means the value needs to be less than 35, and "near" for distance means the value needs to be less than 75. For this rule, a random influence value between 35 and 65 is generated, and this process is repeated for all nine rules to convert them into formulas. The training data for this example consists of 100 records. Then you can click on "Import Training Data".

Next, you can set the number of iterations and particles yourself; the more iterations, the longer the training process takes. Once the settings are configured, you can start the training process. After training, you can check the mean square error (MSE), the accuracy, and the knowledge base after learning. We can see that the ranges of the linguistic terms after learning are closer to the rules we set. Great! That's how to use the ZAI-FML platform for PSO-based model training. Do you all understand? If you have any questions, feel free to leave a message below. See you next time.
Bye~
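The simulated-data step above can be sketched in Python, with `random.randint` standing in for the spreadsheet formula. The ranges follow the first rule as described in the video (evaluation below 35 for "bad", distance below 75 for "near", an influence value between 35 and 65); generating records for the other eight rules works the same way.

```python
# Sketch of generating simulated training records for the first rule.
# Ranges are taken from the video's first rule; in practice the 100
# records would be spread across all nine rules, not just this one.
import random

random.seed(42)  # fixed seed so the example is reproducible

def record_for_rule_1():
    evaluation = random.randint(0, 34)  # "bad" evaluation: < 35
    distance = random.randint(0, 74)    # "near" distance: < 75
    influence = random.randint(35, 65)  # "recommend" output: 35..65
    return evaluation, distance, influence

# The example training set has 100 records; here we generate 100 for
# rule 1 only, to show the mechanism.
data = [record_for_rule_1() for _ in range(100)]
print(len(data))  # 100
```

Each tuple corresponds to one row of the training-data template, so the list can be written out as a CSV and imported with "Import Training Data".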
Hello everyone! We are team OASE from Taiwan. Welcome to the OASE CI&AI-FML travel recommendation system. I'm Chih-Yu Chen from the National University of Tainan, Taiwan. I'm Che-Chia Liang from National Cheng Kung University in Taiwan. I'm Pei-Ying Wu from the National University of Tainan, Taiwan. Wow~ Hello, I'm the AI-FML Robot. Welcome to our travel recommendation system. Today I will take you to three attractions. Before departure, I will help you check your belongings. Remember to prepare your cellphone. Next, remember to prepare your water bottle, so you can replenish water at any time. Finally, remember to bring your wallet. Great, everything is ready. Then we can go. Slow is recommended. Resume stop. This is an AR museum, where we can change the color and angles of the image, as well as adjust the size of the facial features, to create our own artwork. Open the headlight. Slow is recommended. Here is the AR coloring book. Put your phone on the picture and press the button. Scan it to transform the 2D image into a 3D image and add color to it. Fast is recommended. Arriving at the destination. This is the AR band: you can use the app to transform the pictures on the paper into 3D models and create the sounds of the instruments on them. Slow down. What a beautiful day. Goodbye~ Thank you for watching and see you soon~
Welcome to the AI-FML Travel Recommendation Application. You can send the inferred result from the AI-FML Learning Platform. Taipei 101 is too far; Kebbi does not recommend this travel spot. If you wish to go to Taipei 101, please touch my head. The Eye of Gangshan in Kaohsiung is not bad; Kebbi recommends this travel spot. If you wish to go to the Eye of Gangshan, please touch my head. The Chimei Museum is very close to Tainan; Kebbi highly recommends this travel spot. If you wish to go to the Chimei Museum, please touch my head. Okay~
Hello everyone! Today we will introduce the Advanced Driver Assistance System (ADAS). This system can assist in driving by monitoring the distance to the vehicle in front and providing corresponding feedback during the journey. Okay, let's start!

First, establish your own knowledge model. There are two input variables, Distance and Light, and one output variable, Degree of Danger; then set up the boundary values and rules. You can refer to another, more detailed video. Next, execute Python code to obtain the necessary real-world data. In our example, we need to collect distance and brightness data: distance can be obtained using an ultrasound sensor, while brightness can be obtained using a light sensor. Step 2: organize the collected real-world data and define the desired output, then perform PSO (Particle Swarm Optimization) machine learning. Step 3: based on the trained model, set up the conditional statements in the code. In this step, you can incorporate desired audio files and other functionalities, such as fan rotation. With these additions, you can start the implementation process.

Before implementation, we need to establish an MQTT connection: activate our CI Agent, upload the trained knowledge model, and set up MQTT subscribers and publishers. Only then can we proceed with bidirectional interaction. Upon initiation, the learning tool will start receiving real-time data, publishing values for Distance and Light. After inference, the inference results will be promptly displayed on the LCD panel. "WiFi connected." When the distance is shorter and the light is dimmer, it indicates a higher level of danger. Conversely, when the distance is greater and the light is brighter, it signifies a safer condition. Other cases fall within the normal range. During the demonstration, the system announces the inference result in Taiwanese as conditions change: "It's safe.", "A little bit safe.", and "It's dangerous."
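The Step-3 conditional logic can be sketched as a small Python function. The thresholds and the three-level mapping below are illustrative assumptions, standing in for the actual fuzzy inference result that the CI Agent publishes back over MQTT.

```python
# Toy stand-in for the inferred degree of danger: a shorter distance and
# a dimmer light each push the level up. The thresholds (20 cm, 100) are
# assumptions for this sketch, not the values learned by PSO.

def danger_level(distance_cm, light):
    """Return 0 (safe), 1 (a little bit safe), or 2 (dangerous)."""
    score = 0
    if distance_cm < 20:   # too close to the vehicle in front
        score += 1
    if light < 100:        # too dark to keep the distance
        score += 1
    return score

FEEDBACK = {0: "It's safe.", 1: "A little bit safe.", 2: "It's dangerous."}

print(FEEDBACK[danger_level(55, 331)])  # It's safe.
print(FEEDBACK[danger_level(10, 50)])   # It's dangerous.
```

In the real system, this is where you would hook in the audio files and extra actuators such as the fan, keyed to the inferred level.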
Hello, my name is Mei-Hui Wang, and I come from the National University of Tainan, Taiwan. Today, I will introduce and demonstrate how to save a CI model for a Travel Recommendation System using the QCI&AI-FML Learning Platform. So, let's get started. The QCI&AI-FML Learning Platform enables you to save your CI model as a new model. You can also click on "Download CI Model" to save it to your local computer. Additionally, you can load your CI model to validate that it meets the IEEE 1855 standard. You can also directly upload a model to the QCI&AI-FML Learning Platform by clicking on the "Create New Model" button. For instance, load the CI model for the Smart Green House: load the model, choose the model, and click. You can then view the CI knowledge model, and in the Archived CI Models you can see your Smart Green House. Finally, every page features the "Download CI Model" button, located at the bottom-right corner, for downloading the model. Congratulations on successfully uploading and downloading the CI model using the QCI&AI-FML Learning Platform. See you next time. Bye-bye.
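A downloaded CI model is an XML document in the style of the IEEE 1855 Fuzzy Markup Language (FML). The fragment below is only an illustrative sketch for the travel example: the element and attribute names follow common FML usage, and all parameter values are assumptions, so consult the standard for the exact schema before hand-editing a model.

```xml
<!-- Illustrative FML-style fragment (assumed names/values, not a real export) -->
<fuzzySystem name="TravelRecommendation">
  <knowledgeBase>
    <fuzzyVariable name="evaluation" domainleft="0" domainright="100" type="input">
      <fuzzyTerm name="bad">
        <trapezoidShape param1="0" param2="0" param3="25" param4="35"/>
      </fuzzyTerm>
      <!-- "normal" and "good" terms omitted for brevity -->
    </fuzzyVariable>
    <!-- "distance" input and the output variable omitted -->
  </knowledgeBase>
  <!-- rule base omitted -->
</fuzzySystem>
```

Validating an uploaded model against the IEEE 1855 schema is what the platform's load-and-validate step performs for you.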
Hello everyone! Today, I'm going to introduce the application of the CI&AI-FML Learning Tool (SD module operation). First, let's go through the program. At line 15, I read the data line by line. Then, by using the WriteFile function, I store the 'stored_data' above in test.txt, which will be written into the 'sd/output' directory. We also make some changes to the 'stored_data'. In addition, there is another function called ReadFile; for example, I can read the content of test.py located in the 'sd/input' directory. Furthermore, there is also a DeleteFile function; for example, I can delete test.py located in the 'sd/output' directory.

Before running the program, I will show the current content of the SD card. First, we can see there is a test.py under the input directory, which is the one we will read after running the program. Please remember that test.py starts with 'AIoT'; we will check the result after running the program. There is also a test.py under the output directory, which will be deleted, and a test.txt into which 'stored_data' will be written. It contains 'hi' now, but it will be updated to 'hi there' after running the program. Now, I will start the program. As we can see, ReadFile has completed, and we can see 'AIoT', which means ReadFile was successful. We can also see that deleting test.py was successful. Then, we have to restart the Learning Tool to see the outcome on the SD card. Now, I check whether the stored data 'hi there' has been written into test.txt. Nice, successful! And test.py has been deleted. This is the operation of the SD module.

Next, I will introduce the camera module program. First, we go through the program to find out where we get the photo. As we can see, the WriteImage function saves the image as photo.jpg in the output directory. Now, we can start the program. As the LCD shows, we can see the live image from the camera.
We can press the first button to take a photo. OK. If I want to see the photo I shot, I have to restart the Learning Tool. photo.jpg is in the 'sd/output' directory, and then we can view the photo. This is the introduction to the camera module.

Next, I'm going to introduce the ADAS_manual program. First, the program checks whether the WiFi is connected. Next, the program checks the connection of the MQTT protocol. When everything is all right, I can start my operation. First, I open the webpage of the CI Agent and publish data. This time, I published data to get a 'safe' situation: we can see the fan start rotating, and we can see the message of receiving the 'safe' data in the Terminal of Thonny. Now, I do it again and publish another 'safe' situation. Next, I publish a 'dangerous' situation, and the learning tool receives the message. The learning tool receives another 'dangerous'. At this time, we can see the words 'Watch out! You are too close.' on the LCD. Additionally, we can use buttons to interact: the third button publishes a 'safe' situation, the fourth one a 'dangerous' situation, and the fifth a 'medium' situation. Oops! I forgot to reconnect the green LED, so I test the 'medium' situation again. This is the operation of the ADAS_manual program.

Next, I will introduce the ADAS_Automatic program. The same as before, I start the program: first checking WiFi, then checking the MQTT protocol, and waiting for some preparation time. Now, we can see some reasonable numbers: distance: 55 cm, light: 331, humidity: 74%, temperature: 30°C. Now, I'm going to cover the light sensor module to change the lightness. OK, now we can see the lightness has decreased drastically, and there is text shown on the LCD, 'It is too dark to keep the distance.', to warn the driver. Then, I'm going to test the ultrasound module.
Now, I'm putting my hand close to the sensor, and we can see the distance shown on the Terminal decrease dramatically from 30 cm to 9 cm, while the warning sign grows larger at the same time; there is also text on the LCD to warn of the danger. Next, we can move to the CI Agent to check whether the two-way communication works well. As we can see, all the distance and lightness measurements are sent to the CI Agent to perform fuzzy inference. Then, the CI Agent publishes the value of Alert back to the Learning Tool for further reaction.

Next, I'm going to test the AI Pin module program. This program is used to collect four kinds of data, including distance, light, humidity, and temperature, and store them as a CSV file. First, we can see we have a StoreFile function to save the data to the 'sd/output' path. We delete the last file and rewrite it with a new one. Now, let's start. The Learning Tool will keep collecting new data, and I can do different things to collect different data, such as approaching the ultrasound sensor rapidly to collect smaller distance values, covering the light sensor to collect darker light values, or slowly getting closer to the ultrasound sensor to collect steadily decreasing distances. To get the collected data, we have to restart the Learning Tool by pressing its reset button. Next, move to the path 'sd/output' to see the collected data, data.csv. Now, we have the data we just collected.
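The AI Pin data-collection step can be sketched in plain Python. The actual program runs on the Learning Tool's firmware, so the helper names here (`read_sensors`, `store_file`) and the value ranges are illustrative stand-ins, and the sensor readings are faked with random numbers on a desktop machine.

```python
# Sketch of the AI Pin data-collection loop: delete any previous CSV,
# then write freshly collected (distance, light, humidity, temperature)
# rows. On the device, data.csv would live under 'sd/output' and the
# values would come from the real sensors, not random numbers.
import csv
import os
import random

OUTPUT = "data.csv"  # stand-in for the device's 'sd/output' path

def read_sensors():
    """Fake sensor readings in plausible ranges (assumption for the sketch)."""
    return (random.uniform(5, 100),   # distance in cm
            random.uniform(0, 500),   # light level
            random.uniform(40, 90),   # humidity in %
            random.uniform(20, 35))   # temperature in deg C

def store_file(path, n_rows):
    """Delete the last file and re-write it with newly collected rows."""
    if os.path.exists(path):
        os.remove(path)
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["distance", "light", "humidity", "temperature"])
        for _ in range(n_rows):
            writer.writerow(round(v, 1) for v in read_sensors())

store_file(OUTPUT, 20)
with open(OUTPUT) as f:
    rows = list(csv.reader(f))
print(len(rows))  # header + 20 data rows = 21
```

A CSV produced this way has the same shape as the data.csv collected on the Learning Tool, ready to be imported as training data for PSO learning.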