KL Divergence in Picture and Examples “Kullback–Leibler divergence is the difference between the cross entropy H(P, Q) and the true entropy H(P).” [1] “And this is what we use as a loss function while training neural networks. When we have an image classification problem, the training data and corresponding correct labels represent …
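The identity in the excerpt, D_KL(P‖Q) = H(P, Q) − H(P), can be checked directly. A minimal sketch with hypothetical distributions p and q (the helper names are illustrative, not from the post):

```python
import math

def entropy(p):
    """Shannon entropy H(P) in nats."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def cross_entropy(p, q):
    """Cross entropy H(P, Q) in nats."""
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q) if pi > 0)

def kl_divergence(p, q):
    """D_KL(P || Q) = H(P, Q) - H(P)."""
    return cross_entropy(p, q) - entropy(p)

p = [0.4, 0.6]   # "true" distribution
q = [0.5, 0.5]   # model distribution
print(kl_divergence(p, q))  # small positive number; zero only when q == p
```

Note that D_KL is always non-negative and vanishes exactly when the two distributions coincide, which is why it works as a training loss.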
Category: AI ML DS RL DL NN NLP Data Mining Optimization
Feb 06
Misc. : Classifier Performance and Model Selection
Cross Validation: (en.wikipedia.org) “Cross-validation is a resampling procedure used to evaluate machine learning models on a limited data sample. The procedure has a single parameter called k that refers to the number of groups that a given data sample is to be split into. As such, the procedure is often called k-fold cross-validation.” …
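The splitting step described above can be sketched in a few lines — a minimal, library-free illustration (the function name is mine, not from the post):

```python
def kfold_indices(n, k):
    """Split indices 0..n-1 into k roughly equal, disjoint folds."""
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    folds, start = [], 0
    for size in fold_sizes:
        folds.append(list(range(start, start + size)))
        start += size
    return folds

# Each fold serves once as the held-out test set; the rest is training data.
for i, test_idx in enumerate(kfold_indices(10, 5)):
    train_idx = [j for j in range(10) if j not in test_idx]
    print(f"fold {i}: test={test_idx}")
```

In practice the data would be shuffled before splitting; the score reported is the average over the k held-out folds.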
Feb 06
NLP : AI and ML
Feature Selection: Filter Methods, Wrapper Methods, Embedded Methods. Feature Selection Checklist: Do you have domain knowledge? Are your features commensurate? Do you suspect interdependence of features? If … Weka: “Feature Selection to Improve Accuracy and Decrease Training Time”. Scikit-Learn: “Feature Selection in Python with Scikit-Learn”. R: “Feature Selection with the Caret R Package”. Reference …
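Of the three families named above, filter methods are the simplest: score each feature independently of any model, then keep the top-scoring ones. A minimal sketch using absolute Pearson correlation as the score (function names and the toy data are mine, not from the post):

```python
def pearson_r(x, y):
    """Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def filter_select(X, y, k):
    """Filter method: keep indices of the k feature columns most
    correlated (in absolute value) with the target y."""
    scores = [abs(pearson_r(col, y)) for col in X]
    ranked = sorted(range(len(X)), key=lambda i: scores[i], reverse=True)
    return sorted(ranked[:k])

X = [[1, 2, 3, 4],        # feature 0: perfectly related to y
     [4, 4, 4, 4.1],      # feature 1: nearly constant
     [2, 1, 4, 3]]        # feature 2: weakly related
y = [10, 20, 30, 40]
print(filter_select(X, y, 1))  # [0]
```

Wrapper methods would instead evaluate feature subsets by retraining a model, and embedded methods (e.g. L1 regularization) select during training itself.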
Feb 04
Misc. Optimization:
“Linear programming (LP, also called linear optimization) is a method to achieve the best outcome (such as maximum profit or lowest cost) in a mathematical model whose requirements are represented by linear relationships.” (en.wikipedia.org › wiki › Linear_programming) “Branch and bound (BB, B&B, or BnB) is an algorithm design paradigm …
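The two excerpts connect naturally: branch and bound often uses an LP relaxation as its bound. A minimal sketch on the 0/1 knapsack problem, where the fractional (LP-relaxed) fill of the remaining capacity bounds what any subtree can achieve (the function name and toy instance are mine, not from the post):

```python
def knapsack_bb(values, weights, capacity):
    """Branch and bound for 0/1 knapsack; the bound is the
    greedy fractional fill, i.e. the LP relaxation."""
    order = sorted(range(len(values)),
                   key=lambda i: values[i] / weights[i], reverse=True)
    best = 0

    def bound(i, value, room):
        # Fractionally filling remaining room gives an upper bound.
        for j in order[i:]:
            if weights[j] <= room:
                room -= weights[j]
                value += values[j]
            else:
                return value + values[j] * room / weights[j]
        return value

    def branch(i, value, room):
        nonlocal best
        best = max(best, value)
        if i == len(order) or bound(i, value, room) <= best:
            return  # prune: this subtree cannot beat the incumbent
        j = order[i]
        if weights[j] <= room:
            branch(i + 1, value + values[j], room - weights[j])  # take j
        branch(i + 1, value, room)                               # skip j

    branch(0, 0, capacity)
    return best

print(knapsack_bb([60, 100, 120], [10, 20, 30], 50))  # 220
```

Pruning whenever the relaxed bound cannot beat the best solution found so far is what separates branch and bound from brute-force enumeration.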
Feb 02
Misc Basic Statistics for Data Science
Hypergeometric Distribution “In probability theory and statistics, the hypergeometric distribution is a discrete probability distribution that describes the probability of k successes (random draws for which the object drawn has a specified feature) in n draws, without replacement, from a finite population of size N that contains exactly K objects with that feature, wherein each draw is either a …
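The definition above translates directly into the pmf P(X = k) = C(K, k)·C(N−K, n−k)/C(N, n). A minimal sketch (the function name is mine, not from the post):

```python
from math import comb

def hypergeom_pmf(k, N, K, n):
    """P(X = k): probability of exactly k successes in n draws
    without replacement from a population of N containing K
    objects with the feature of interest."""
    return comb(K, k) * comb(N - K, n - k) / comb(N, n)

# Example: draw 5 cards from a 52-card deck;
# probability of getting exactly 2 of the 4 aces.
print(hypergeom_pmf(2, 52, 4, 5))  # about 0.0399
```

Contrast with the binomial distribution, which models draws *with* replacement; the two agree in the limit of a large population.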
Feb 02
The Kalman Filter: Theory : Example: Equations: Applications
The Kalman Filter: An algorithm for making sense of fused sensor data “The Kalman filter is relatively quick and easy to implement and, under certain conditions, provides an optimal estimate of the state from normally distributed noisy sensor readings. Mr. Kalman was so convinced of his algorithm that he was able to inspire a friendly …
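The predict/update cycle the excerpt alludes to is easiest to see in one dimension. A minimal sketch for a constant-state model, where the filter blends its prediction with each noisy reading via the Kalman gain (parameter names q, r and the toy data are mine, not from the post):

```python
def kalman_1d(measurements, q=1e-3, r=0.5, x0=0.0, p0=1.0):
    """Minimal 1-D Kalman filter for a constant-state model.
    q: process noise variance, r: measurement noise variance,
    x0/p0: initial state estimate and its variance."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        # Predict: state assumed constant, uncertainty grows by q.
        p += q
        # Update: Kalman gain weights prediction vs. measurement.
        k = p / (p + r)
        x += k * (z - x)
        p *= (1 - k)
        estimates.append(x)
    return estimates

noisy = [1.2, 0.8, 1.1, 0.9, 1.05]  # readings of a true value near 1.0
print(kalman_1d(noisy))
```

The gain k shrinks as confidence in the state grows, so later measurements move the estimate less — exactly the smoothing behavior that makes the filter useful for sensor fusion.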
Feb 02
Why You Don’t Need to Be Bezos to Worry About Spyware
“5. Could that happen to me? Yes, but the likelihood of that varies greatly. If you are a lawyer, journalist, activist or politician in possession of sensitive data, or an enemy of a regime that has little regard for human rights, you could …
Jan 30
Misc. Math. Data Science. Machine Learning. Optimization. Vector, PCA, Basis, Covariance
Orthonormality: Orthonormal Vectors “In linear algebra, two vectors in an inner product space are orthonormal if they are orthogonal and unit vectors. A set of vectors forms an orthonormal set if all vectors in the set are mutually orthogonal and all of unit length. …
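The definition reduces to one condition on dot products: ⟨u_i, u_j⟩ must equal 1 when i = j (unit length) and 0 otherwise (orthogonality). A minimal check (function names and example vectors are mine, not from the post):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def is_orthonormal(vectors, tol=1e-9):
    """True iff every vector has unit length and every
    distinct pair is orthogonal: <u_i, u_j> = delta_ij."""
    for i, u in enumerate(vectors):
        for j, v in enumerate(vectors):
            expected = 1.0 if i == j else 0.0
            if abs(dot(u, v) - expected) > tol:
                return False
    return True

e = [[1.0, 0.0], [0.0, 1.0]]                       # standard basis
r = [[2**-0.5, 2**-0.5], [-(2**-0.5), 2**-0.5]]    # basis rotated 45 degrees
print(is_orthonormal(e), is_orthonormal(r))  # True True
```

Orthonormal bases are what PCA produces: the principal components are mutually orthogonal unit vectors, which is why coordinates in that basis are simple dot products.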
Jan 29
Misc Math, Data Science, Machine Learning, PCA, FA
“In mathematics, a set B of elements (vectors) in a vector space V is called a basis, if every element of V may be written in a unique way as a (finite) linear combination of elements of B. The coefficients of this linear combination are referred to as components or coordinates on B of the …
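The "unique linear combination" in the definition is concrete in R²: given an independent pair b1, b2, the coordinates of any v solve a 2×2 linear system. A minimal sketch via Cramer's rule (the function name and example vectors are mine, not from the post):

```python
def coordinates_in_basis(v, b1, b2):
    """Solve v = c1*b1 + c2*b2 for (c1, c2) in R^2 via Cramer's rule.
    Assumes b1, b2 are linearly independent (nonzero determinant)."""
    det = b1[0] * b2[1] - b2[0] * b1[1]
    c1 = (v[0] * b2[1] - b2[0] * v[1]) / det
    c2 = (b1[0] * v[1] - v[0] * b1[1]) / det
    return c1, c2

# The vector (3, 5) in the basis {(1, 1), (1, -1)}:
c1, c2 = coordinates_in_basis([3, 5], [1, 1], [1, -1])
print(c1, c2)  # 4.0 -1.0   (check: 4*(1,1) + (-1)*(1,-1) = (3, 5))
```

Uniqueness follows from the nonzero determinant: a dependent pair (det = 0) is not a basis, and then some vectors have no representation or infinitely many.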
Jan 29
‘Scam’ fundraisers reap millions in the name of heart-tugging causes
"The call centers in Alabama, along with others in Nevada, New Jersey, and Florida, raise money on behalf of “scam PACs,” slang among critics for political action committees that purport to support worthy causes but in reality hand over little of the money for political – or charitable – purposes. Instead, the bulk of the …