Non-Optoelectronics
Analysis of a Movie Recommendation System Built from Scratch - Machine Learning
The project consists of building a movie recommendation system from scratch and analyzing parameters such as the cost function and the optimal feature size. Figure 2.2.2 shows the RMS loss vs. iteration curves for varied feature sizes: all loss curves begin to level out as the iteration count rises. The initial loss is clearly higher for larger feature sizes, and the gradient is steeper as well, similar to the mean loss curve; in this plot, however, the curves sit slightly farther apart. Figure 2.2.3 depicts the RMS loss vs. iteration curves for a fixed feature size: again, all loss curves flatten as the iteration count rises. Each curve starts from the same initial value, and the steepness of the curve changes with the learning rate; at a learning rate of 2e-5, the model converges most accurately. Figure 2.4.2, the RMS loss (cost) vs. feature-size plot, shows that the model performs best at a specific feature size, though this critical value is also influenced by the learning rate. For this model, about four features is the right number. Figure 3.4.2 also depicts the impact of the regularization coefficient: the ideal feature size shifts with the value of lambda. There is a trade-off between data dependency and the number of model parameters.
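The training loop behind these curves can be sketched as batch gradient descent on a matrix-factorization model. This is a minimal NumPy sketch, not the project's actual code; the feature size k, learning rate, and regularization coefficient lam are illustrative placeholders for the hyperparameters swept in the figures.

```python
import numpy as np

rng = np.random.default_rng(0)

def train(R, mask, k=4, lr=0.01, lam=0.01, iters=300):
    """Matrix-factorization recommender trained by batch gradient descent.

    R    : (users x movies) rating matrix
    mask : 1 where a rating exists, 0 otherwise
    k    : feature size (the swept hyperparameter)
    """
    n_users, n_movies = R.shape
    W = rng.normal(scale=0.1, size=(n_users, k))   # per-user parameter vectors
    X = rng.normal(scale=0.1, size=(n_movies, k))  # per-movie feature vectors
    losses = []
    for _ in range(iters):
        err = (W @ X.T - R) * mask                 # error on rated entries only
        grad_W = err @ X + lam * W                 # regularized gradients
        grad_X = err.T @ W + lam * X
        W -= lr * grad_W
        X -= lr * grad_X
        losses.append(np.sqrt((err ** 2).sum() / mask.sum()))  # RMS loss
    return X, W, losses
```

Plotting `losses` for several values of `k` (or `lr`, or `lam`) reproduces the kind of RMS-loss-vs.-iteration comparison described above.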
Analysis of Logistic Regression Loss vs SVM - Machine Learning
In this project, the logistic regression loss is computed from scratch and its accuracy is compared with an SVM. Figure 2 clearly shows that at very low learning rates all three data sets produce inaccurate results, whereas at a learning rate of 0.001 all three yield accuracies greater than 95%. Beyond this learning rate, test accuracy begins to decline, even though the validation and training accuracies saturate at 0.001. Investigation of this curve leads to the conclusion that the learning rate for the proposed model should remain at 0.001. Figure 3.2 illustrates the main theme of this discussion: increasing the size of the training data set greatly improves test accuracy, and a learning rate of 0.001 is the best fit for the proposed model, at around 96% accuracy.
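The from-scratch logistic regression described here can be sketched as gradient descent on the cross-entropy loss. This is a hedged illustration, not the project's code; the function names, the toy data in the usage note, and the epoch count are assumptions, while the default learning rate of 0.001 matches the value the analysis settles on.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logreg(X, y, lr=0.001, epochs=1000):
    """Binary logistic regression trained by batch gradient descent
    on the cross-entropy loss."""
    w = np.zeros(X.shape[1])
    b = 0.0
    n = len(y)
    for _ in range(epochs):
        p = sigmoid(X @ w + b)          # predicted probabilities
        grad_w = X.T @ (p - y) / n      # gradient of cross-entropy loss
        grad_b = (p - y).mean()
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

def accuracy(X, y, w, b):
    """Fraction of examples whose thresholded prediction matches the label."""
    return ((sigmoid(X @ w + b) >= 0.5) == y).mean()
```

Sweeping `lr` over a grid and plotting `accuracy` on the training, validation, and test splits reproduces the learning-rate comparison in Figure 2.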
Designing a ‘7-bit Hamming code encoder and decoder’ using the Cadence tool - VLSI
The Hamming code serves the purpose of detecting and correcting single-bit errors that may occur during data transmission. This project proposes a Hamming code encoder and decoder with a configuration of 4 data bits and 3 parity bits. The RTL design for this project is written in Verilog, and the physical design is also carried out. Various analyses, such as slack time, NanoRoute routing, post-route timing, and DRC checks, are performed throughout the project. The Cadence software, accessed via BUET's server, is used to implement and develop this project.
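The encode/decode logic the Verilog RTL implements can be sketched in software. This is a minimal Python sketch of a standard (7,4) Hamming code, assuming the conventional bit layout [p1, p2, d1, p3, d2, d3, d4]; the project's actual bit ordering may differ.

```python
def encode(d):
    """Encode 4 data bits into a 7-bit codeword [p1, p2, d1, p3, d2, d3, d4]."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4   # parity over positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4   # parity over positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4   # parity over positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def decode(c):
    """Return the 4 data bits, correcting any single-bit error via the syndrome."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    pos = s1 + 2 * s2 + 4 * s3   # 1-indexed error position; 0 means no error
    c = list(c)                  # copy so the caller's codeword is untouched
    if pos:
        c[pos - 1] ^= 1          # flip the erroneous bit
    return [c[2], c[4], c[5], c[6]]
```

Because the syndrome directly names the erroneous bit position, the decoder corrects any single flipped bit, which is exactly the property the hardware design verifies.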
Notable Software Projects
An algorithm utilizing numerical techniques to apply various audio effects to an input sound
An emotion detection program that analyzes crude facial curves extracted through image processing
Development of a scientific calculator through the utilization of Verilog coding language and Proteus software
Developing a ventilator controller by regulating variable gain in respiratory systems for improved ventilation
Electrical Service Design for a Three-Storied Building Using AutoCAD-2D Software
Notable Hardware Projects
Autonomous Solar Tracking System: Optimizing Solar Power Generation
Automatic Brightness Control System for Light Matrices Using Ambient Light Intensity
Self-Navigating Robot Using Google Maps-Based Autonomy
Industrial-Focused Mechanized IC Testing and Characterization
Temperature-Based Automated Fan Speed Controller