An Ebook by Jason Brownlee (https://machinelearningmastery.com/ )
(Reviewed in August 2019 by George Dong)
Initially I liked the layout of the book: each topic/chapter has a goals section and a summary at the end. But my enthusiasm quickly cooled as I glanced through the first few chapters, the ones about the basics of Python, SciPy (NumPy, Pandas and Matplotlib) and loading data. They seem to target complete beginners yet lack the necessary depth, I thought. Therefore I skipped the entire Part II (Lessons) and delved into Part III (Projects).
This section starts with a template for all machine learning projects, demonstrated on the Iris flowers project, and then works through one regression problem (Boston house prices) and one classification problem (sonar rocks vs. mines).
The introductions and code are all well laid out, and most cases are easily understood. However, there are places where I wished for more elaboration, such as the use of Pipeline, StandardScaler, the choice of models, GridSearchCV, etc. It took some googling to work them out before I managed to complete the chapters in this part. I just wished there were more explanation of certain lines of code and certain terms. One specific problem: I had to search for the meaning of “support” in classification_report().
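For readers stuck on the same points, here is a minimal sketch of my own (not taken from the book) that chains those pieces together; the dataset, model and parameter grid are just placeholders I chose for illustration:

```python
# A minimal sketch (my own, not from the book) of the pieces I had to look up:
# Pipeline, StandardScaler, GridSearchCV and classification_report.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import classification_report

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=7)

# The pipeline chains scaling and the model, so during cross-validation the
# scaler is fitted only on the training folds (no data leakage).
pipe = Pipeline([("scale", StandardScaler()), ("svc", SVC())])

# GridSearchCV tries every parameter combination using cross-validation.
grid = GridSearchCV(pipe, {"svc__C": [0.1, 1, 10]}, cv=5)
grid.fit(X_train, y_train)

# "support" in the report is simply the number of true samples of each
# class present in y_test.
print(classification_report(y_test, grid.predict(X_test)))
```

The double-underscore in `svc__C` is how GridSearchCV addresses a parameter of a named step inside a pipeline.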
However, had I not skipped those chapters (the later ones in Part II), I would have had none of those concerns, as I found out afterwards!
After the coding practice, I went back through the table of contents, specifically the skipped chapters 7-17. Now I could see that the author has practical experience and gives practical tips for better performance. For example, in chapter 5.5: “On classification problems you need to know how balanced the class values are. Highly imbalanced problems (a lot more observations for one class than another) are common and may need special handling in the data preparation stage of your project. You can quickly get an idea of the distribution of the class attribute in Pandas.” - by using df.groupby().size()
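The Pandas tip quoted above is a one-liner; here is a tiny sketch with made-up data (my own, not from the book):

```python
# Count how many rows belong to each class value, as the book suggests.
import pandas as pd

df = pd.DataFrame({"class": ["mine", "rock", "rock", "rock", "mine"]})
counts = df.groupby("class").size()
print(counts)  # one row per class value with its frequency
```

A heavily skewed count here is the signal that the problem may need special handling (resampling, class weights, etc.) during data preparation.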
In those skipped chapters, a whole set of common problems and issues is explained in the order of the machine learning steps: data preprocessing, modelling, metrics, evaluating models, etc. Many of my initial questions could have been answered there!
Therefore, I do believe this is a really good book for people with some programming experience to get into machine learning. For those who are weak on Python, it is also a good starter, as you will become familiar with the common Python features used in machine learning. It must be stated that machine learning here is based on scikit-learn (sklearn), a Python library that does not cover deep learning; as far as I can tell, deep neural networks are not involved.
To use this book for teaching, there would need to be more comments on the code: certain lines need to be explained either in the code or in the text. However, the book makes a good planning guide, so that each section can be reinforced as needed according to students’ needs. Simply put, this book forms a teaching programme outline.
Another good feature of this ebook is that there are many hyperlinks to various resources. More could be added for beginner users.
Image Classification, Object Detection and Face Recognition in Python
by Jason Brownlee (https://machinelearningmastery.com/ )
(Reviewed in August 2019 by George Dong)
This is truly a good book for a beginner like me. It is good because it starts from the basics of computer vision and model building, progresses through image preprocessing, the principles of convolutional networks and image classification, and goes all the way to object detection and face recognition, with examples and working code at every step. Prior to using this book, I had watched videos and read a couple of ebooks (without completing any), and that was the extent of my knowledge of computer vision. Having completed this book and most of the accompanying exercises, I feel good about the subject, like a confident beginner!
This book is the best if you are teaching yourself, because it has practical coding exercises for every step along the journey. Starting from loading, resizing and augmentation, from reshaping and batching to filtering and pooling, and through a set of image classification examples (Fashion-MNIST, dogs vs. cats, satellite photos), you build up the skills smoothly. By the time I reached object detection and face recognition, I was a little surprised that I was not confused and could follow the pace.
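To give a flavour of one of those building blocks, here is a tiny NumPy sketch of 2x2 max pooling; the array values are made up and the helper name is my own, not the book's:

```python
# A tiny NumPy sketch (my own, not from the book) of 2x2 max pooling,
# one of the convolutional-network building blocks mentioned above.
import numpy as np

def max_pool_2x2(img):
    """Downsample a 2D array by taking the max of each 2x2 block."""
    h, w = img.shape
    # Trim odd edges, then group pixels into 2x2 blocks and take each block's max.
    trimmed = img[:h - h % 2, :w - w % 2]
    return trimmed.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

img = np.array([[1, 2, 3, 0],
                [4, 5, 6, 1],
                [0, 1, 2, 3],
                [7, 0, 1, 2]])
pooled = max_pool_2x2(img)
print(pooled)  # -> [[5 6]
                #    [7 3]]
```

In a real network a framework layer (e.g. Keras's MaxPooling2D) does this per channel; the point here is just the shrinking-by-max idea.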
It is the best book also because it tries to offer the best of the field. I did a bit of web research and found that the models used in the book, such as VGG, Mask R-CNN, YOLOv3, VGGFace and FaceNet, are all state-of-the-art designs.
When I have time and/or need I am going to go over it again.
A good purchase of this summer!
Udemy course
to be reviewed
Udemy course
to be reviewed
Make It Stick: The Science of Successful Learning - P. Brown et al., 2014 (ISBN 9780674729018)
Retrieval practice means self-quizzing (quizzing and testing).
Translate your reading into questions and test yourself repeatedly.
Self-testing: make your own questions; if not willing to do so, use ready-made questions from the teacher.
The benefits of designing questions? Repetition, questioning, retrieval. Repetition only works if it is spaced!
Interleaved learning (interlacing): alternating rather than consecutive, i.e. AAABBBCCC --> ABCABCABC. Why is this good for learning? Forgetting is good for learning, provided you revise and revisit, because it helps long-term retention. Repetition is good for long-term memory; space out the retrievals.
Struggle helps: go for hard questions, seek challenge. Effortful learning.
Elaboration: finding new layers of meaning in the material, e.g. an image or a metaphor.
Generation: answer questions before being taught.
Reflection: a few minutes of review of what you have learned.
Calibration: check what you learned is correct.
Mnemonic devices: aids for memorising. (Enumerate too.)
Taxonomy:
Muscle Memory -