Hello everyone! Today we will introduce the Advanced Driver Assistance System (ADAS). This system assists driving by monitoring the distance to the vehicle in front and providing corresponding feedback during the journey. OK, let's start!

First, establish your own knowledge model. There are two input variables, Distance and Light, and one output variable, Degree of Danger. Then set up the boundary values and the rules. You can refer to another, more detailed video.

Next, execute Python code to obtain the necessary real-world data. In our example, we need to collect distance and brightness data. Distance can be obtained using an ultrasound sensor, while brightness can be obtained using a light sensor.

Step 2: organize the collected real-world data and define the desired output. Then perform PSO (Particle Swarm Optimization) machine learning.

Step 3: based on the trained model, set up the conditional statements in the code. In this step, you can incorporate the desired audio files and other functionality, such as fan rotation. With these additions, you can start the implementation process.

Before implementation, we need to establish an MQTT connection: activate our CI Agent, upload the trained knowledge model, and set up the MQTT subscribers and publishers. Only then can we proceed with bidirectional interaction. Upon initiation, the learning tool starts receiving real-time data, publishing the values of Distance and Light. After inference, the inference results are promptly displayed on the LCD panel. "WiFi connected."

When the distance is shorter and the light is dimmer, the level of danger is higher; conversely, when the distance is greater and the light is brighter, the condition is safer. Other cases fall within the normal range. As the demo runs, the tool announces each inference result aloud in Taiwanese: "It's safe," "A little bit safe," or "It's dangerous," matching the sensed conditions. (Sketches of the sensor-reading, PSO, and MQTT steps follow below.)
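As a recap of the data-collection step above, here is a minimal sketch of reading distance and brightness, assuming a MicroPython board with an HC-SR04-style ultrasound sensor and an analog light sensor. The pin numbers and wiring are assumptions for illustration, not the learning tool's actual configuration.

```python
# Minimal MicroPython sketch: read distance (ultrasound) and brightness (light sensor).
# Pin numbers are assumptions for illustration only.
from machine import Pin, ADC, time_pulse_us
import time

trig = Pin(12, Pin.OUT)   # assumed ultrasound trigger pin
echo = Pin(13, Pin.IN)    # assumed ultrasound echo pin
light = ADC(Pin(34))      # assumed analog light-sensor pin

def read_distance_cm():
    # Send a 10 us trigger pulse, then time the echo; sound travels ~0.0343 cm/us.
    trig.value(0); time.sleep_us(2)
    trig.value(1); time.sleep_us(10)
    trig.value(0)
    duration = time_pulse_us(echo, 1, 30000)  # timeout after 30 ms
    if duration < 0:
        return None  # no echo received within the timeout
    return (duration * 0.0343) / 2

while True:
    print("Distance:", read_distance_cm(), "cm, Light:", light.read())
    time.sleep(1)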
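The PSO training itself is handled by the CI&AI-FML tools; purely to illustrate the idea of Particle Swarm Optimization mentioned in Step 2, here is a generic, self-contained sketch that tunes a parameter vector to minimize an error function. The objective and dimensions are placeholders, not the tool's actual fuzzy-model training.

```python
# Generic PSO sketch: each particle is a candidate parameter vector that moves
# toward its own best-known position and the swarm's best-known position.
import random

def pso(objective, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    pos = [[random.uniform(-1, 1) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]

    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Example: minimize the sphere function as a stand-in objective.
best, err = pso(lambda p: sum(x * x for x in p), dim=2)
print(best, err)
```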
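For the MQTT setup described above, a sketch using MicroPython's umqtt.simple client; the broker address, topic names, and client ID are assumptions for illustration, not the CI Agent's actual configuration.

```python
# MQTT subscriber/publisher sketch (MicroPython, umqtt.simple).
# Broker address and topic names are assumptions for illustration only.
from umqtt.simple import MQTTClient

def on_message(topic, msg):
    # The CI Agent publishes the inferred alert back to the learning tool.
    print("Inference result:", topic, msg)

client = MQTTClient("learning-tool", "broker.example.com")  # assumed broker
client.set_callback(on_message)
client.connect()
client.subscribe(b"adas/alert")   # assumed inference-result topic

# Publish one round of sensor values, then poll for the inferred alert.
client.publish(b"adas/distance", b"55")
client.publish(b"adas/light", b"331")
client.check_msg()                # non-blocking poll for one message
```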
Hello everyone. Today, I'm going to introduce the application of the CI&AI-FML Learning Tool (SD Module Operation).

First, we can go through the program. At line 15, I read the data line by line. Then, using the WriteFile function, I store the 'stored_data' above in test.txt, which is written into the 'sd/output' directory. We also make some changes to 'stored_data.' In addition, I have another function called ReadFile; for example, I can read the content of the test.py located in the 'sd/input' directory. Furthermore, I also have a DeleteFile function; for example, I can delete the test.py located in the 'sd/output' directory. (A sketch of these file helpers follows at the end of this part.)

Before running the program, I will show what the content of the SD card looks like right now. First, we can see there is a test.py under the input directory, which is the one we will read after running the program. Please remember that test.py starts with 'AIoT'; we will check the result after running the program. There is also a test.py under the output directory, which will be deleted, and a test.txt into which 'stored_data' will be written. It currently contains 'hi,' which will be updated to 'hi there' after running the program.

Now, I will start the program. As we can see, ReadFile has completed, and we can see 'AIoT,' which means ReadFile was successful. We can also see that deleting test.py was successful. Then, we have to restart the Learning Tool to see the outcome on the SD card. Now, I check whether the stored data, 'hi there,' has been written into test.txt. Nice, successful. And test.py has been deleted. This is the operation of the SD module.

Next, I will introduce the camera module program. First, we go through the program to find out where we will get the photo. As we can see, the WriteImage function will save the image as photo.jpg in the output directory. Now, we can start the program. As the LCD shows, we can see the live image from the camera, and we can press the first button to take a photo. OK. If I want to see the photo I just took, I have to restart the Learning Tool. photo.jpg is in the 'sd/output' directory; then we can have the photo. This is the introduction to the camera module.

Next, I'm going to introduce the ADAS_manual program. First, the program checks whether the WiFi is connected. Next, it checks the connection of the MQTT protocol. When everything is all right, I can start my operation. First, I open the webpage of the CI Agent and publish data. This time, I published data corresponding to a 'safe' situation. We can see the fan start rotating, and we can see the message confirming receipt of the 'safe' data in the Thonny terminal. Now, I do it again and publish a 'safe' situation, too. Next, I publish a 'dangerous' situation, and the learning tool receives the message. The learning tool receives another 'dangerous.' At this point, we can see the words 'Watch out! You are too close.' on the LCD. Additionally, we can use buttons to interact: the third button publishes a 'safe' situation, the fourth a 'dangerous' situation, and the fifth a 'medium' situation. Oops! I forgot to reconnect the green LED, so I test the 'medium' situation again. This is the operation of the ADAS_manual program. (A sketch of this message handling also follows below.)

Next, I will introduce the ADAS_Automatic program. As before, I start the program: first it checks WiFi, then the MQTT protocol, and then it waits for some preparation time.
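A minimal sketch of what the ReadFile / WriteFile / DeleteFile helpers described above might look like on a MicroPython board with the SD card mounted at 'sd/'; the function bodies are assumptions based on the narration, not the learning tool's actual source.

```python
# Sketch of SD-card file helpers (MicroPython); the SD card is assumed
# to be mounted at 'sd/' with 'input' and 'output' subdirectories.
import os

def WriteFile(path, data):
    # Overwrite the file with the given text.
    with open(path, "w") as f:
        f.write(data)

def ReadFile(path):
    # Return the file content line by line.
    with open(path) as f:
        return f.readlines()

def DeleteFile(path):
    # Remove the file from the SD card.
    os.remove(path)

stored_data = "hi there"
WriteFile("sd/output/test.txt", stored_data)  # 'hi' becomes 'hi there'
for line in ReadFile("sd/input/test.py"):     # prints the 'AIoT' content
    print(line, end="")
DeleteFile("sd/output/test.py")
```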
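For the ADAS_manual flow above, a sketch of how the learning tool might react to the published situation; the topic name, actuator pins, and payloads ('safe', 'medium', 'dangerous') are assumptions drawn from the narration.

```python
# Sketch: react to 'safe'/'medium'/'dangerous' messages from the CI Agent.
# Topic name and pin numbers are assumptions for illustration only.
from machine import Pin
from umqtt.simple import MQTTClient

fan = Pin(14, Pin.OUT)        # assumed fan-control pin
green_led = Pin(15, Pin.OUT)  # assumed green LED pin

def on_message(topic, msg):
    if msg == b"safe":
        fan.value(1)           # the fan rotates in the 'safe' case
        green_led.value(0)
    elif msg == b"medium":
        fan.value(0)
        green_led.value(1)     # the green LED lights in the 'medium' case
    elif msg == b"dangerous":
        fan.value(0)
        green_led.value(0)
        print("Watch out! You are too close.")  # shown on the LCD in the demo

client = MQTTClient("adas-manual", "broker.example.com")  # assumed broker
client.set_callback(on_message)
client.connect()
client.subscribe(b"adas/situation")  # assumed topic
while True:
    client.wait_msg()  # block until the next published situation
```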
Now, we can see some reasonable numbers: Distance: 55 cm, Light: 331, Humidity: 74%, Temperature: 30°C.

Now, I'm going to cover the Light Sensor Module to change the brightness. OK, now we can see the brightness has decreased drastically, and text appears on the LCD, 'It is too dark to keep the distance,' to warn the driver. Then, I'm going to test the Ultrasound Module. Now, I'm putting my hand close to the sensor, and we can see the distance shown in the terminal decrease dramatically from 30 cm to 9 cm while the warning sign grows larger at the same time. There is also text on the LCD to warn of the danger.

Next, we can move to the CI Agent to check whether the two-way communication works well. As we can see, all the measured distance and light values are sent to the CI Agent to perform fuzzy inference. Then, the CI Agent publishes the value of Alert back to the Learning Tool for further reaction.

Next, I'm going to test the AI Pin Module program. This program collects four values, distance, light, humidity, and temperature, and stores them as a CSV file. First, we can see that we have a StoreFile function to save the data to the 'sd/output' path. We delete the last file and re-write it with a new one. Now, let's start. The Learning Tool keeps collecting new data, and I can do things to collect different data, such as approaching the Ultrasound sensor rapidly to collect smaller distance values, covering the Light Sensor to collect darker light values, or slowly getting closer to the Ultrasound sensor to collect steadily decreasing distances. To get the collected data, we have to restart the Learning Tool by pressing its reset button. Next, move to the 'sd/output' path to see the collected data, data.csv. Now, we have the data we just collected. (A sketch of this collection loop follows below.)
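A sketch of the AI Pin data-collection loop described above; the read_* helpers are hypothetical placeholders (returning the example values from the demo) standing in for the tool's real sensor drivers, and the delete-then-rewrite behavior of StoreFile follows the narration.

```python
# Sketch: collect distance, light, humidity, and temperature, and store as CSV.
import os, time

CSV_PATH = "sd/output/data.csv"  # assumes the SD card is mounted at 'sd/'

# Placeholder readers returning the example values seen in the demo;
# the real program reads the Ultrasound, Light, and humidity/temperature sensors.
def read_distance(): return 55
def read_light(): return 331
def read_humidity(): return 74
def read_temperature(): return 30

def StoreFile(row):
    # Append one comma-separated row to data.csv in the 'sd/output' path.
    with open(CSV_PATH, "a") as f:
        f.write(",".join(str(v) for v in row) + "\n")

# Delete the last file and re-write it with a new one, per the narration.
try:
    os.remove(CSV_PATH)
except OSError:
    pass
StoreFile(["distance", "light", "humidity", "temperature"])

while True:
    StoreFile([read_distance(), read_light(), read_humidity(), read_temperature()])
    time.sleep(1)
```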