{"id":16536,"date":"2019-12-29T14:56:24","date_gmt":"2019-12-29T19:56:24","guid":{"rendered":"https:\/\/bangla.salearningschool.com\/recent-posts\/?p=16536"},"modified":"2020-01-30T21:04:24","modified_gmt":"2020-01-31T02:04:24","slug":"part-2-some-basic-math-statistics-concepts-that-data-scientists-the-true-ones-will-usually-know-use","status":"publish","type":"post","link":"http:\/\/bangla.sitestree.com\/?p=16536","title":{"rendered":"Part 2: Some basic Math\/Statistics concepts that Data Scientists (the true ones) will usually know\/use"},"content":{"rendered":"<p>Part 2: Some basic Math\/Statistics concepts that Data Scientists (the true ones) will usually know\/use (came across, studied, learned, used)<\/p>\n<p><strong>Covariance and Correlation<\/strong><\/p>\n<p><strong>&#8220;Covariance<\/strong> is a measure of how two variables change together, but its magnitude is unbounded, so it is difficult to interpret. By dividing <strong>covariance<\/strong> by the product of the two standard deviations, one can calculate the normalized version of the statistic. This is the correlation <strong>coefficient<\/strong>.&#8221; <a href=\"https:\/\/www.investopedia.com\/terms\/c\/correlationcoefficient.asp\">https:\/\/www.investopedia.com\/terms\/c\/correlationcoefficient.asp<\/a> on Investing and Covariance\/Correlation<\/p>\n<p><strong>Covariance and expected value<\/strong><\/p>\n<p><strong>&#8220;<\/strong>Covariance is calculated as expected value or average of the product of the differences of each random variable from their expected values, where E[X] is the expected value for X and E[Y] is the expected value of y.<strong>&#8220;<\/strong><br \/>cov(X, Y) = E[(X &#8211; E[X]) . 
(Y &#8211; E[Y])]<br \/>cov(X, Y) = sum (x &#8211; E[X]) * (y &#8211; E[Y]) * 1\/n<\/p>\n<p>Sample covariance: cov(X, Y) = sum (x &#8211; E[X]) * (y &#8211; E[Y]) * 1\/(n &#8211; 1)<br \/>Ref: <a href=\"https:\/\/machinelearningmastery.com\/introduction-to-expected-value-variance-and-covariance\/\">https:\/\/machinelearningmastery.com\/introduction-to-expected-value-variance-and-covariance\/<\/a><\/p>\n<p>Formula for Continuous Variables<\/p>\n<p><a rel=\"attachment wp-att-16537\"><img data-recalc-dims=\"1\" decoding=\"async\" src=\"https:\/\/i0.wp.com\/www.statlect.com\/images\/covariance-formula__11.png?w=750&#038;ssl=1\" alt=\"[eq4]\" \/><\/a><\/p>\n<p>where <img data-recalc-dims=\"1\" loading=\"lazy\" decoding=\"async\" src=\"https:\/\/i0.wp.com\/www.statlect.com\/images\/covariance-formula__12.png?resize=56%2C25&#038;ssl=1\" alt=\"[eq5]\" width=\"56\" height=\"25\" \/> is the <a href=\"https:\/\/www.statlect.com\/glossary\/joint-probability-density-function\">joint probability density function<\/a> of <img data-recalc-dims=\"1\" loading=\"lazy\" decoding=\"async\" src=\"https:\/\/i0.wp.com\/www.statlect.com\/s.gif?resize=13%2C24&#038;ssl=1\" alt=\"X\" width=\"13\" height=\"24\" \/> and <img data-recalc-dims=\"1\" loading=\"lazy\" decoding=\"async\" src=\"https:\/\/i0.wp.com\/www.statlect.com\/s.gif?resize=11%2C24&#038;ssl=1\" alt=\"Y\" width=\"11\" height=\"24\" \/>.<\/p>\n<p>Formula for Discrete Variables<\/p>\n<p><img data-recalc-dims=\"1\" decoding=\"async\" src=\"https:\/\/i0.wp.com\/www.statlect.com\/images\/covariance-formula__3.png?w=750&#038;ssl=1\" alt=\"[eq1]\" \/><\/p>\n<p>Reference: <a href=\"https:\/\/www.statlect.com\/glossary\/covariance-formula\">https:\/\/www.statlect.com\/glossary\/covariance-formula<\/a><\/p>\n<p><strong>Correlation:<\/strong><\/p>\n<h3>What is Correlation?<\/h3>\n<p>&#8220;Correlation, in the finance and investment industries, is a statistic that measures the degree to which two securities move in relation to each other. 
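The covariance and correlation formulas above translate directly into code. This is a minimal sketch in plain Python (the data points are made up for illustration): population covariance divides by n, sample covariance by n &#8211; 1, and the correlation coefficient is the covariance divided by the product of the two standard deviations.

```python
from math import sqrt

def mean(v):
    return sum(v) / len(v)

def covariance(x, y, sample=True):
    # cov(X, Y) = sum((x - E[X]) * (y - E[Y])) / (n - 1) for a sample,
    # or divided by n for a whole population.
    mx, my = mean(x), mean(y)
    denom = len(x) - 1 if sample else len(x)
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / denom

def correlation(x, y):
    # Normalizing covariance by the two standard deviations bounds the
    # result between -1.0 and +1.0, which makes it interpretable.
    return covariance(x, y) / (sqrt(covariance(x, x)) * sqrt(covariance(y, y)))

x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.0, 4.0, 6.0, 8.0, 10.0]  # y = 2x, so the correlation should be 1
print(covariance(x, y), correlation(x, y))  # covariance 5.0, correlation ~ 1.0
```

Note that the 1\/(n &#8211; 1) versus 1\/n choice cancels in the correlation, so the coefficient is the same under either convention as long as it is applied consistently.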
Correlations are used in advanced <a href=\"https:\/\/www.investopedia.com\/terms\/p\/portfoliomanagement.asp\">portfolio management<\/a>, computed as the <a href=\"https:\/\/www.investopedia.com\/terms\/c\/correlationcoefficient.asp\">correlation coefficient<\/a>, which has a value that must fall between -1.0 and +1.0.&#8221; Ref: <a href=\"https:\/\/www.investopedia.com\/terms\/c\/correlation.asp\">https:\/\/www.investopedia.com\/terms\/c\/correlation.asp<\/a><\/p>\n<p><strong>Correlation formula:<\/strong><\/p>\n<p><a href=\"https:\/\/i0.wp.com\/bangla.salearningschool.com\/wp-content\/uploads\/2019\/12\/image-2.png\" rel=\"attachment wp-att-16539\"><img data-recalc-dims=\"1\" loading=\"lazy\" decoding=\"async\" class=\"alignnone  wp-image-16539\" title=\"image-2-png\" src=\"https:\/\/i0.wp.com\/bangla.salearningschool.com\/wp-content\/uploads\/2019\/12\/image-2.png?resize=381%2C113\" alt=\"\" width=\"381\" height=\"113\" srcset=\"https:\/\/i0.wp.com\/bangla.sitestree.com\/wp-content\/uploads\/2019\/12\/image-2.png?w=463 463w, https:\/\/i0.wp.com\/bangla.sitestree.com\/wp-content\/uploads\/2019\/12\/image-2.png?resize=300%2C89 300w\" sizes=\"auto, (max-width: 381px) 100vw, 381px\" \/><\/a><\/p>\n<p><strong>Ref:<\/strong> <a href=\"http:\/\/www.stat.yale.edu\/Courses\/1997-98\/101\/correl.htm\">http:\/\/www.stat.yale.edu\/Courses\/1997-98\/101\/correl.htm<\/a><\/p>\n<h2>Correlation in Linear Regression:<\/h2>\n<p>&#8220;The square of the correlation coefficient, <em>r\u00b2<\/em>, is a useful value in <a href=\"http:\/\/www.stat.yale.edu\/Courses\/1997-98\/101\/linreg.htm\">linear regression<\/a>. This value represents the fraction of the variation in one variable that may be explained by the other variable. 
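The r&#178; claim quoted above can be checked numerically: for a simple least-squares line, the fraction of variance explained (R&#178; = 1 &#8211; SSE\/SST) equals the squared correlation coefficient. A small sketch in plain Python, with made-up data:

```python
def mean(v):
    return sum(v) / len(v)

def pearson_r(x, y):
    # Correlation coefficient r from the standard sums of squares.
    mx, my = mean(x), mean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def r_squared_from_fit(x, y):
    # Fit the least-squares line y = b0 + b1*x, then compute
    # R^2 = 1 - SSE/SST: the fraction of variation in y explained by x.
    mx, my = mean(x), mean(y)
    b1 = sum((a - mx) * (b - my) for a, b in zip(x, y)) / sum((a - mx) ** 2 for a in x)
    b0 = my - b1 * mx
    sse = sum((b - (b0 + b1 * a)) ** 2 for a, b in zip(x, y))
    sst = sum((b - my) ** 2 for b in y)
    return 1 - sse / sst

x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [1.2, 1.9, 3.2, 3.8, 5.1]
print(pearson_r(x, y) ** 2, r_squared_from_fit(x, y))  # the two values agree
```

This identity is exact for simple linear regression, which is why squaring an observed correlation of 0.8 directly gives the 64% figure in the quote.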
Thus, if a correlation of 0.8 is observed between two variables (say, height and weight, for example), then a linear regression model attempting to explain <em>either<\/em> variable in terms of the other variable will account for 64% of the variability in the data.&#8221;<\/p>\n<p>Ref: <a href=\"http:\/\/www.stat.yale.edu\/Courses\/1997-98\/101\/correl.htm\">http:\/\/www.stat.yale.edu\/Courses\/1997-98\/101\/correl.htm<\/a><\/p>\n<p><a href=\"http:\/\/sphweb.bumc.bu.edu\/otlt\/MPH-Modules\/BS\/BS704_Multivariable\/BS704_Multivariable5.html\">http:\/\/sphweb.bumc.bu.edu\/otlt\/MPH-Modules\/BS\/BS704_Multivariable\/BS704_Multivariable5.html<\/a><\/p>\n<p>Ref: <a href=\"http:\/\/ci.columbia.edu\/ci\/premba_test\/c0331\/s7\/s7_5.html\">http:\/\/ci.columbia.edu\/ci\/premba_test\/c0331\/s7\/s7_5.html<\/a><\/p>\n<p><strong>Independent Events<\/strong><\/p>\n<p><a href=\"https:\/\/i0.wp.com\/bangla.salearningschool.com\/wp-content\/uploads\/2019\/12\/image-4.png\" rel=\"attachment wp-att-16541\"><img data-recalc-dims=\"1\" loading=\"lazy\" decoding=\"async\" class=\"alignnone  wp-image-16541\" title=\"image-4-png\" src=\"https:\/\/i0.wp.com\/bangla.salearningschool.com\/wp-content\/uploads\/2019\/12\/image-4.png?resize=233%2C218\" alt=\"\" width=\"233\" height=\"218\" \/><\/a><\/p>\n<p>Image source: <a href=\"https:\/\/www.google.com\/search?q=independent+events+and+probability&amp;sxsrf=ACYBGNThE-C7G91S8zWVtZCGG-etwVRsiQ:1577647135710&amp;tbm=isch&amp;source=iu&amp;ictx=1&amp;fir=av6HN7xYQW0R_M%253A%252ChFNdSKQ_Rig6DM%252C_&amp;vet=1&amp;usg=AI4_-kTnzV6odnJfPBIv7lygyrdpUuQw_Q&amp;sa=X&amp;ved=2ahUKEwiPievIydvmAhWlr1kKHQ2LDAsQ9QEwAHoECAIQAw#imgrc=av6HN7xYQW0R_M:\">www.wyzant.com<\/a><\/p>\n<p>&#8220;In <strong>probability<\/strong>, two <strong>events<\/strong> are <strong>independent<\/strong> if the incidence of one <strong>event<\/strong> does not affect the <strong>probability<\/strong> of the other <strong>event<\/strong>. 
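The multiplication test for independence (P(A &#8745; B) = P(A)P(B) exactly when A and B are independent) is easy to verify with exact arithmetic. A sketch with two fair dice; the specific events chosen here are just illustrative assumptions:

```python
from fractions import Fraction
from itertools import product

# Sample space: all 36 ordered outcomes of rolling a green and a red die.
space = list(product(range(1, 7), repeat=2))

def prob(event):
    # Exact probability of an event over the uniform sample space.
    return Fraction(sum(1 for o in space if event(o)), len(space))

green_six = lambda o: o[0] == 6          # the green die shows 6
red_even = lambda o: o[1] % 2 == 0       # the red die shows an even number
high_sum = lambda o: o[0] + o[1] >= 10   # the total is at least 10

# Independent: the green die tells us nothing about the red die.
print(prob(lambda o: green_six(o) and red_even(o)) ==
      prob(green_six) * prob(red_even))   # -> True

# Dependent: seeing a green 6 makes a high total more likely.
print(prob(lambda o: green_six(o) and high_sum(o)) ==
      prob(green_six) * prob(high_sum))   # -> False
```

Using `Fraction` keeps the comparison exact, so equality of the two probabilities is a genuine test rather than a floating-point approximation.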
If the incidence of one <strong>event<\/strong> does affect the <strong>probability<\/strong> of the other <strong>event<\/strong>, then the <strong>events<\/strong> are dependent.&#8221;<\/p>\n<p>Ref: <a href=\"https:\/\/brilliant.org\/wiki\/probability-independent-events\/\">https:\/\/brilliant.org\/wiki\/probability-independent-events\/<\/a><\/p>\n<p><strong>How do you know if two events are independent?<\/strong><\/p>\n<p>&#8220;To test <strong>whether<\/strong> two <strong>events<\/strong> A and B are <strong>independent<\/strong>, calculate P(A), P(B), and P(A \u2229 B), and then check <strong>whether<\/strong> P(A \u2229 B) equals P(A)P(B). <strong>If<\/strong> they are equal, A and B are <strong>independent<\/strong>; <strong>if<\/strong> not, they are dependent.&#8221;<br \/>Ref: <a href=\"https:\/\/www.zweigmedia.com\/RealWorld\/tutorialsf15e\/frames7_5C.html\">https:\/\/www.zweigmedia.com\/RealWorld\/tutorialsf15e\/frames7_5C.html<\/a><br \/>With Examples: <a href=\"https:\/\/www.mathsisfun.com\/data\/probability-events-independent.html\">https:\/\/www.mathsisfun.com\/data\/probability-events-independent.html<\/a><\/p>\n<h2>Joint Distributions and Independence<\/h2>\n<p>The joint PMF of X1, X2, \u22ef, Xn is defined as<br \/>P<sub>X1,X2,&#8230;,Xn<\/sub>(x1, x2, &#8230;, xn) = P(X1 = x1, X2 = x2, &#8230;, Xn = xn).<\/p>\n<p>For the continuous case:<br \/>P((X1, X2, \u22ef, Xn) \u2208 A) = \u222b\u22ef\u222b<sub>A<\/sub> f<sub>X1X2\u22efXn<\/sub>(x1, x2, \u22ef, xn) dx1 dx2 \u22ef dxn.<\/p>\n<p>Marginal PDF of X1 (and similarly for any Xi):<br \/>f<sub>X1<\/sub>(x1) = \u222b<sub>\u2212\u221e<\/sub><sup>\u221e<\/sup> \u22ef \u222b<sub>\u2212\u221e<\/sub><sup>\u221e<\/sup> f<sub>X1X2&#8230;Xn<\/sub>(x1, x2, &#8230;, xn) dx2 \u22ef dxn.<\/p>\n<p><em><strong>Ref: <\/strong><\/em><a href=\"https:\/\/www.probabilitycourse.com\/chapter6\/6_1_1_joint_distributions_independence.php\">https:\/\/www.probabilitycourse.com\/chapter6\/6_1_1_joint_distributions_independence.php<\/a><\/p>\n<p>Random Vectors, Random Matrices, and Their 
Expected Values<\/p>\n<p><a href=\"http:\/\/www.statpower.net\/Content\/313\/Lecture%20Notes\/MatrixExpectedValue.pdf\">http:\/\/www.statpower.net\/Content\/313\/Lecture%20Notes\/MatrixExpectedValue.pdf<\/a><\/p>\n<p>Random Variables and Probability Distributions: <a href=\"https:\/\/www.stat.pitt.edu\/stoffer\/tsa4\/intro_prob.pdf\">https:\/\/www.stat.pitt.edu\/stoffer\/tsa4\/intro_prob.pdf<\/a><\/p>\n<p><strong>What are moments of a random variable?<\/strong><\/p>\n<p>&#8220;The \u201cmoments\u201d of a <strong>random variable<\/strong> (or of its distribution) are expected values of powers or related functions of the <strong>random variable<\/strong>. The rth <strong>moment<\/strong> of X is E(X<sup>r<\/sup>). In particular, the first <strong>moment<\/strong> is the mean, \u00b5<sub>X<\/sub> = E(X). The mean is a measure of the \u201ccenter\u201d or \u201clocation\u201d of a distribution.&#8221; Ref: <a href=\"http:\/\/homepages.gac.edu\/~holte\/courses\/mcs341\/fall10\/documents\/sect3-3a.pdf\">http:\/\/homepages.gac.edu\/~holte\/courses\/mcs341\/fall10\/documents\/sect3-3a.pdf<\/a><\/p>\n<h1>Characteristic function (probability theory)<\/h1>\n<p><a href=\"https:\/\/en.wikipedia.org\/wiki\/File:Sinc_simple.svg\"><img loading=\"lazy\" decoding=\"async\" class=\"\" src=\"https:\/\/upload.wikimedia.org\/wikipedia\/commons\/thumb\/e\/ea\/Sinc_simple.svg\/275px-Sinc_simple.svg.png\" alt=\"\" width=\"230\" height=\"146\" \/><\/a>The characteristic function of a uniform <em>U<\/em>(\u20131,1) random variable. 
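For U(&#8211;1,1), the characteristic function works out to sin(t)\/t, and this can be confirmed numerically, since &#966;(t) = E[e^(itX)] is just an integral of e^(itx) against the density. A sketch in plain Python using a midpoint-rule approximation (the step count is an arbitrary choice):

```python
import cmath
import math

def cf_uniform(t, n=20000):
    # phi(t) = E[exp(i*t*X)] for X ~ Uniform(-1, 1): integrate
    # exp(i*t*x) * (1/2) over [-1, 1] with a midpoint rule.
    h = 2.0 / n
    total = 0j
    for k in range(n):
        x = -1.0 + (k + 0.5) * h
        total += cmath.exp(1j * t * x)
    return total * 0.5 * h

t = 2.0
print(cf_uniform(t).real, math.sin(t) / t)  # the two values agree closely
# The imaginary part is ~0 because the distribution is symmetric about 0,
# matching the real-valued curve in the figure.
```

The vanishing imaginary part is exactly the symmetry property the caption describes: characteristic functions are complex-valued in general, but symmetric distributions give real-valued ones.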
This function is real-valued because it corresponds to a random variable that is symmetric around the origin; however characteristic functions may generally be complex-valued.<\/p>\n<p>&#8220;In <a title=\"Probability theory\" href=\"https:\/\/en.wikipedia.org\/wiki\/Probability_theory\">probability theory<\/a> and <a title=\"Statistics\" href=\"https:\/\/en.wikipedia.org\/wiki\/Statistics\">statistics<\/a>, the <strong>characteristic function<\/strong> of any <a title=\"Real-valued\" href=\"https:\/\/en.wikipedia.org\/wiki\/Real-valued\">real-valued<\/a> <a title=\"Random variable\" href=\"https:\/\/en.wikipedia.org\/wiki\/Random_variable\">random variable<\/a> completely defines its <a title=\"Probability distribution\" href=\"https:\/\/en.wikipedia.org\/wiki\/Probability_distribution\">probability distribution<\/a>. If a random variable admits a <a title=\"Probability density function\" href=\"https:\/\/en.wikipedia.org\/wiki\/Probability_density_function\">probability density function<\/a>, then the characteristic function is the <a title=\"Fourier transform\" href=\"https:\/\/en.wikipedia.org\/wiki\/Fourier_transform\">Fourier transform<\/a> of the probability density function.&#8221;<\/p>\n<p><em><strong>Ref: <\/strong><\/em><a href=\"https:\/\/en.wikipedia.org\/wiki\/Characteristic_function_(probability_theory)\">https:\/\/en.wikipedia.org\/wiki\/Characteristic_function_(probability_theory)<\/a><\/p>\n<h1>Functions of random vectors and their distribution<\/h1>\n<p><a href=\"https:\/\/www.statlect.com\/fundamentals-of-probability\/functions-of-random-vectors\">https:\/\/www.statlect.com\/fundamentals-of-probability\/functions-of-random-vectors<\/a><\/p>\n<p><a href=\"https:\/\/i0.wp.com\/bangla.salearningschool.com\/wp-content\/uploads\/2019\/12\/Screen-Shot-2019-12-29-at-2.53.21-PM.png\" rel=\"attachment wp-att-16542\"><img data-recalc-dims=\"1\" loading=\"lazy\" decoding=\"async\" class=\"alignnone  wp-image-16542\" 
title=\"screen-shot-2019-12-29-at-2-53-21-pm-png\" src=\"https:\/\/i0.wp.com\/bangla.salearningschool.com\/wp-content\/uploads\/2019\/12\/Screen-Shot-2019-12-29-at-2.53.21-PM.png?resize=441%2C243\" alt=\"\" width=\"441\" height=\"243\" srcset=\"https:\/\/i0.wp.com\/bangla.sitestree.com\/wp-content\/uploads\/2019\/12\/Screen-Shot-2019-12-29-at-2.53.21-PM.png?w=624 624w, https:\/\/i0.wp.com\/bangla.sitestree.com\/wp-content\/uploads\/2019\/12\/Screen-Shot-2019-12-29-at-2.53.21-PM.png?resize=300%2C165 300w\" sizes=\"auto, (max-width: 441px) 100vw, 441px\" \/><\/a><\/p>\n\n\n<p>\u00a0<\/p>\n<p><em><strong>Ref: <\/strong><\/em><a href=\"https:\/\/books.google.com\/books?id=xz9nQ4wdXG4C&amp;pg=PA42&amp;lpg=PA42&amp;dq=the+characteristic+functions+of+a+vector+random+variable+is+shalom+kiruba&amp;source=bl&amp;ots=VqY-t6i-u2&amp;sig=ACfU3U09k0CHqK_9Lowd8MkLoyjo1ela1Q&amp;hl=en&amp;sa=X&amp;ved=2ahUKEwiaw9mQ0dvmAhXBmeAKHaSgDUgQ6AEwCXoECAkQAQ#v=onepage&amp;q=the%20characteristic%20functions%20of%20a%20vector%20random%20variable%20is%20shalom%20kiruba&amp;f=false\">https:\/\/books.google.com\/books?id=xz9nQ4wdXG4C&amp;pg=PA42&amp;lpg=PA42&amp;dq=the+characteristic+functions+of+a+vector+random+variable+is+shalom+kiruba&amp;source=bl&amp;ots=VqY-t6i-u2&amp;sig=ACfU3U09k0CHqK_9Lowd8MkLoyjo1ela1Q&amp;hl=en&amp;sa=X&amp;ved=2ahUKEwiaw9mQ0dvmAhXBmeAKHaSgDUgQ6AEwCXoECAkQAQ#v=onepage&amp;q=the%20characteristic%20functions%20of%20a%20vector%20random%20variable%20is%20shalom%20kiruba&amp;f=false<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Part 2: Some basic Math\/Statistics concepts that Data Scientists (the true ones) will usually know\/use (came across, studied, learned, used) Covariance and Correlation &#8220;Covariance is a measure of how two variables change together, but its magnitude is unbounded, so it is difficult to interpret. 
By dividing covariance by the product of the two standard deviations, &hellip; <\/p>\n<p><a class=\"more-link btn\" href=\"http:\/\/bangla.sitestree.com\/?p=16536\">Continue reading<\/a><\/p>\n","protected":false},"author":2,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_jetpack_memberships_contains_paid_content":false,"footnotes":""},"categories":[1908,182],"tags":[],"class_list":["post-16536","post","type-post","status-publish","format-standard","hentry","category-math-and-statistics-for-data-science-and-engineering","category---blog","item-wrap"],"jetpack_featured_media_url":"","jetpack_sharing_enabled":true,"jetpack-related-posts":[{"id":16701,"url":"http:\/\/bangla.sitestree.com\/?p=16701","url_meta":{"origin":16536,"position":0},"title":"Misc. Math. Data Science. Machine Learning. Optimization. Vector, PCA, Basis, Covariance","author":"Sayed","date":"January 30, 2020","format":false,"excerpt":"Misc. Math. Data Science. Machine Learning. Optimization. Vector, PCA, Basis, Covariance Orthonormality: Orthonormal Vectors \"In linear algebra, two vectors in an inner product space are orthonormal if they are orthogonal and unit vectors. 
A set of vectors form an orthonormal set if all vectors in the set are mutually orthogonal\u2026","rel":"","context":"In &quot;AI ML DS RL DL NN NLP Data Mining Optimization&quot;","block_context":{"text":"AI ML DS RL DL NN NLP Data Mining Optimization","link":"http:\/\/bangla.sitestree.com\/?cat=1910"},"img":{"alt_text":"","src":"https:\/\/i0.wp.com\/bangla.sitestree.com\/wp-content\/uploads\/2020\/01\/image-8-e1580436936625.png?resize=350%2C200","width":350,"height":200},"classes":[]},{"id":71246,"url":"http:\/\/bangla.sitestree.com\/?p=71246","url_meta":{"origin":16536,"position":1},"title":"What do the covariance matrices of Stationary signals look?","author":"Sayed","date":"September 27, 2021","format":false,"excerpt":"If you can, answer the question below: Write your answer in the comment box. What do the covariance matrices of Stationary signals look?","rel":"","context":"In &quot;Matrix and Signal Processing&quot;","block_context":{"text":"Matrix and Signal Processing","link":"http:\/\/bangla.sitestree.com\/?cat=1944"},"img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":64265,"url":"http:\/\/bangla.sitestree.com\/?p=64265","url_meta":{"origin":16536,"position":2},"title":"trackFuser, Multi-Object Trackers, Track-Level Fusion of Radar and Lidar Data,  Understanding Sensor Fusion and Tracking, Part 6: What Is Track-Level Fusion?","author":"Sayed","date":"June 9, 2021","format":false,"excerpt":"trackFuser https:\/\/www.mathworks.com\/help\/fusion\/ref\/trackfuser-system-object.html?searchHighlight=covariance%20intersection&s_tid=srchtitle#d123e151992 Multi-Object Trackers https:\/\/www.mathworks.com\/help\/fusion\/multi-object-trackers.html?searchHighlight=covariance%20intersection&s_tid=srchtitle Track-Level Fusion of Radar and Lidar Data https:\/\/www.mathworks.com\/help\/lidar\/ug\/track-level-fusion-of-radar-and-lidar.html?searchHighlight=covariance%20intersection&s_tid=srchtitle Introduction to Track-To-Track Fusion 
https:\/\/www.mathworks.com\/help\/fusion\/ug\/introduction-to-track-to-track-fusion.html?searchHighlight=covariance%20intersection&s_tid=srchtitle Understanding Sensor Fusion and Tracking, Part 6: What Is Track-Level Fusion? https:\/\/www.mathworks.com\/support\/search.html\/videos\/sensor-fusion-part-6-what-is-track-level-fusion-1598607201282.html?fq[]=asset_type_name:video&fq[]=category:phased\/index&page=1 Misc. Tracking and Fusion https:\/\/www.mathworks.com\/support\/search.html?q=covariance%20intersection&page=1","rel":"","context":"In &quot;\u09ac\u09cd\u09b2\u0997 \u0964 Blog&quot;","block_context":{"text":"\u09ac\u09cd\u09b2\u0997 \u0964 Blog","link":"http:\/\/bangla.sitestree.com\/?cat=182"},"img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":16575,"url":"http:\/\/bangla.sitestree.com\/?p=16575","url_meta":{"origin":16536,"position":3},"title":"Part 4: Some Basic Math\/Stat Concepts for the wanna be Data Scientists","author":"Sayed","date":"December 31, 2019","format":false,"excerpt":"Part 4: Some Basic Math\/Stat Concepts for the wanna be Data Scientists Also for the Engineers in General Quadratic form \"In multivariate statistics, if is a vector of random variables, and is an -dimensional symmetric matrix, then the scalar quantity is known as a quadratic form in . 
\" Ref:\u2026","rel":"","context":"In &quot;AI ML DS RL DL NN NLP Data Mining Optimization&quot;","block_context":{"text":"AI ML DS RL DL NN NLP Data Mining Optimization","link":"http:\/\/bangla.sitestree.com\/?cat=1910"},"img":{"alt_text":"","src":"https:\/\/upload.wikimedia.org\/wikipedia\/commons\/thumb\/3\/3c\/Polynomialdeg4.svg\/233px-Polynomialdeg4.svg.png","width":350,"height":200},"classes":[]},{"id":24967,"url":"http:\/\/bangla.sitestree.com\/?p=24967","url_meta":{"origin":16536,"position":4},"title":"Math and Probability for Data Science #Root","author":"Author-Check- Article-or-Video","date":"April 14, 2021","format":false,"excerpt":"Math and Probability for Data Science Esp. for Reinforcement Learning Probability: cov(x,y): https:\/\/math.stackexchange.com\/questions\/1581773\/computing-the-covariance-of-flipping-a-coin-in-the-first-and-last-flips Fundamental Concept in Machine Learning https:\/\/faculty.washington.edu\/tamre\/BayesTheorem.pdf ---- Lecture Slides - old book https:\/\/www.inf.ed.ac.uk\/teaching\/courses\/rl\/lecturelist.html -- Color : https:\/\/digitalsynopsis.com\/design\/color-thesaurus-correct-names-of-shades\/ Random Sum: http:\/\/www.math.unl.edu\/~sdunbar1\/ProbabilityTheory\/Lessons\/Conditionals\/RandomSums\/randsum.shtml Simple probability examples: http:\/\/jwilson.coe.uga.edu\/emt668\/emat6680.folders\/dickerson\/probability\/titled.html https:\/\/faculty.math.illinois.edu\/~kkirkpat\/SampleSpace.pdf Series and Sum: http:\/\/www.mathcentre.ac.uk\/resources\/uploaded\/mc-ty-convergence-2009-1.pdf Reference: http:\/\/www.sjsu.edu\/faculty\/watkins\/runs.htm https:\/\/mathinsight.org\/image\/two_dice_distribution http:\/\/www.mathguide.com\/lessons2\/ExpectedValues.html http:\/\/www.mathguide.com\/lessons2\/ExpectedValues.html https:\/\/www.texasgateway.org\/resource\/44-geometric-distribution-optional https:\/\/www.me.utexas.edu\/~jensen\/ORMM\/computation\/unit\/rvadd\/discrete_dist\/geometric.html 
http:\/\/www.mathguide.com\/lessons2\/ExpectedValues.html\u2026","rel":"","context":"In &quot;FromSitesTree.com&quot;","block_context":{"text":"FromSitesTree.com","link":"http:\/\/bangla.sitestree.com\/?cat=1917"},"img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":63958,"url":"http:\/\/bangla.sitestree.com\/?p=63958","url_meta":{"origin":16536,"position":5},"title":"Misc: Optimization: Machine Learning: Data Science Resources","author":"Sayed","date":"June 3, 2021","format":false,"excerpt":"http:\/\/web.mit.edu\/15.053\/www\/AMP-Chapter-09.pdf https:\/\/www.cs.cmu.edu\/~anupamg\/adv-approx\/lecture14.pdf http:\/\/bangla.salearningschool.com\/recent-posts\/misc-optimization\/ https:\/\/www.futurelearn.com\/info\/courses\/maths-linear-quadratic-relations\/0\/steps\/12128 https:\/\/www.mathsisfun.com\/algebra\/systems-linear-equations-matrices.html https:\/\/www.wolframalpha.com\/input\/?i=subspace https:\/\/www.cse.iitk.ac.in\/users\/rmittal\/prev_course\/s14\/notes\/lec3.pdf https:\/\/observablehq.com\/@eliaskal\/point-combinations-linear-conic-affine-convex https:\/\/www.cse.iitk.ac.in\/users\/rmittal\/prev_course\/s14\/course_s14.html http:\/\/bangla.salearningschool.com\/recent-posts\/misc-math-might-relate-to-optimization\/ http:\/\/bangla.salearningschool.com\/recent-posts\/part-x-engineering-optimization-mathematical-optimization\/ https:\/\/www.dr-eriksen.no\/teaching\/GRA6035\/2010\/lecture4.pdf https:\/\/www.mathsisfun.com\/calculus\/concave-up-down-convex.html https:\/\/www-ljk.imag.fr\/membres\/Anatoli.Iouditski\/cours\/convex\/chapitre_3.pdf http:\/\/bangla.salearningschool.com\/recent-posts\/optimization-and-linear-algebra-math-from-the-internet\/ https:\/\/www.thestudentroom.co.uk\/showthread.php?t=1247928 https:\/\/en.wikipedia.org\/wiki\/Hessian_matrix https:\/\/en.wikipedia.org\/wiki\/Newton%27s_method_in_optimization https:\/\/onlinelibrary.wiley.com\/doi\/pdf\/10.1002\/9781118733639.app6 
http:\/\/bangla.salearningschool.com\/recent-posts\/optimization-and-linear-algebra-math-from-the-internet\/ https:\/\/en.wikipedia.org\/wiki\/Interior-point_method https:\/\/web.stanford.edu\/~boyd\/papers\/rt_cvx_sig_proc.html Must Read https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/02\/ivantash-optimization_methods_and_their_applications_in_dsp.pdf https:\/\/www.researchgate.net\/publication\/269211254_The_analysis_and_optimization_algorithms_of_the_electronic_circuits_design good one http:\/\/www.bcamath.org\/documentos_public\/courses\/Nogales_2012-13_02_18-22.pdf https:\/\/en.wikipedia.org\/wiki\/Convex_analysis https:\/\/www.khanacademy.org\/search?page_search_query=Optimization%20problems%20(calculus) http:\/\/bangla.salearningschool.com\/recent-posts\/overview-on-optimization-concepts-from-the-internet\/ http:\/\/bangla.salearningschool.com\/recent-posts\/misc-optimization-machine-learning\/ http:\/\/bangla.salearningschool.com\/recent-posts\/misc-math-data-science-machine-learning-optimization-vector-pca-basis-covariance\/ https:\/\/www.mathsisfun.com\/algebra\/matrix-inverse-row-operations-gauss-jordan.html http:\/\/bangla.salearningschool.com\/recent-posts\/misc-math-for-data-science-engineering-and-or-optimization\/","rel":"","context":"In &quot;\u09ac\u09cd\u09b2\u0997 \u0964 Blog&quot;","block_context":{"text":"\u09ac\u09cd\u09b2\u0997 \u0964 
Blog","link":"http:\/\/bangla.sitestree.com\/?cat=182"},"img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]}],"_links":{"self":[{"href":"http:\/\/bangla.sitestree.com\/index.php?rest_route=\/wp\/v2\/posts\/16536","targetHints":{"allow":["GET"]}}],"collection":[{"href":"http:\/\/bangla.sitestree.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"http:\/\/bangla.sitestree.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"http:\/\/bangla.sitestree.com\/index.php?rest_route=\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"http:\/\/bangla.sitestree.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=16536"}],"version-history":[{"count":3,"href":"http:\/\/bangla.sitestree.com\/index.php?rest_route=\/wp\/v2\/posts\/16536\/revisions"}],"predecessor-version":[{"id":16546,"href":"http:\/\/bangla.sitestree.com\/index.php?rest_route=\/wp\/v2\/posts\/16536\/revisions\/16546"}],"wp:attachment":[{"href":"http:\/\/bangla.sitestree.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=16536"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"http:\/\/bangla.sitestree.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=16536"},{"taxonomy":"post_tag","embeddable":true,"href":"http:\/\/bangla.sitestree.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=16536"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}