{"id":16698,"date":"2020-01-29T23:10:58","date_gmt":"2020-01-30T04:10:58","guid":{"rendered":"http:\/\/bangla.salearningschool.com\/recent-posts\/?p=16698"},"modified":"2020-02-08T09:40:24","modified_gmt":"2020-02-08T14:40:24","slug":"misc-math-data-science-machine-learning-pca-fa","status":"publish","type":"post","link":"http:\/\/bangla.sitestree.com\/?p=16698","title":{"rendered":"Misc Math, Data Science, Machine Learning, PCA, FA"},"content":{"rendered":"<p>&#8220;In <a title=\"Mathematics\" href=\"https:\/\/en.wikipedia.org\/wiki\/Mathematics\">mathematics<\/a>, a set B of elements (vectors) in a <a title=\"Vector space\" href=\"https:\/\/en.wikipedia.org\/wiki\/Vector_space\">vector space<\/a> <em>V<\/em> is called a <strong>basis<\/strong>, if every element of <em>V<\/em> may be written in a unique way as a (finite) <a title=\"Linear combination\" href=\"https:\/\/en.wikipedia.org\/wiki\/Linear_combination\">linear combination<\/a> of elements of B. The coefficients of this linear combination are referred to as <strong>components<\/strong> or <strong>coordinates<\/strong> on B of the vector. 
The elements of a basis are called <strong>basis vectors<\/strong>.&#8221;<\/p>\n<p>Equivalently, B is a basis if its elements are linearly independent and every element of V is a linear combination of elements of B.<a href=\"https:\/\/en.wikipedia.org\/wiki\/Basis_(linear_algebra)#cite_note-1\">[1]<\/a> In more general terms, a basis is a linearly independent <a title=\"Spanning set\" href=\"https:\/\/en.wikipedia.org\/wiki\/Spanning_set\">spanning set<\/a>.<\/p>\n<p>A vector space can have several bases; however, all bases have the same number of elements, called the <a title=\"Dimension (vector space)\" href=\"https:\/\/en.wikipedia.org\/wiki\/Dimension_(vector_space)\">dimension<\/a> of the vector space.<\/p>\n<p><a href=\"https:\/\/en.wikipedia.org\/wiki\/Basis_(linear_algebra)\">https:\/\/en.wikipedia.org\/wiki\/Basis_(linear_algebra)<\/a><\/p>\n<p><strong>Positive Semidefinite Matrix<\/strong><br \/>\n&#8220;A <strong>positive semidefinite<\/strong> matrix is a Hermitian matrix all of whose eigenvalues are nonnegative. SEE ALSO: Negative Definite Matrix, Negative <strong>Semidefinite<\/strong> Matrix, <strong>Positive<\/strong> Definite Matrix, <strong>Positive<\/strong> Eigenvalued Matrix, <strong>Positive<\/strong> Matrix.&#8221;<\/p>\n<p><a href=\"http:\/\/mathworld.wolfram.com\/PositiveSemidefiniteMatrix.html\">http:\/\/mathworld.wolfram.com\/PositiveSemidefiniteMatrix.html<\/a><\/p>\n<h1>Hermitian Matrix<\/h1>\n<p>A <a href=\"http:\/\/mathworld.wolfram.com\/SquareMatrix.html\">square matrix<\/a> is called Hermitian if it is <a href=\"http:\/\/mathworld.wolfram.com\/Self-AdjointMatrix.html\">self-adjoint<\/a>. 
Therefore, a Hermitian matrix <img data-recalc-dims=\"1\" loading=\"lazy\" decoding=\"async\" src=\"https:\/\/i0.wp.com\/mathworld.wolfram.com\/images\/equations\/HermitianMatrix\/Inline1.gif?resize=52%2C20\" alt=\"A=(a_(ij))\" width=\"52\" height=\"20\" \/> is defined as one for which<\/p>\n<table summary=\"\" width=\"100%\" cellspacing=\"0\" cellpadding=\"0\">\n<tbody>\n<tr>\n<td align=\"left\"><img data-recalc-dims=\"1\" loading=\"lazy\" decoding=\"async\" src=\"https:\/\/i0.wp.com\/mathworld.wolfram.com\/images\/equations\/HermitianMatrix\/NumberedEquation1.gif?resize=47%2C17\" alt=\"A=A^(H),\" width=\"47\" height=\"17\" \/><\/td>\n<td align=\"right\">(1)<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>where <img data-recalc-dims=\"1\" loading=\"lazy\" decoding=\"async\" src=\"https:\/\/i0.wp.com\/mathworld.wolfram.com\/images\/equations\/HermitianMatrix\/Inline2.gif?resize=17%2C17\" alt=\"A^(H)\" width=\"17\" height=\"17\" \/> denotes the <a href=\"http:\/\/mathworld.wolfram.com\/ConjugateTranspose.html\">conjugate transpose<\/a>. 
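The definitions above (a Hermitian matrix equals its own conjugate transpose; a positive semidefinite matrix is a Hermitian matrix with nonnegative eigenvalues) can be checked numerically. A minimal NumPy sketch; the example matrix is illustrative, not from the quoted sources:

```python
import numpy as np

# An illustrative 2x2 complex matrix, chosen to equal its conjugate transpose
A = np.array([[2.0, 1 - 1j],
              [1 + 1j, 3.0]])

# Hermitian: A == A^H, where A^H is the conjugate transpose
is_hermitian = np.allclose(A, A.conj().T)

# Positive semidefinite: Hermitian, with every eigenvalue >= 0
eigvals = np.linalg.eigvalsh(A)  # real eigenvalues of a Hermitian matrix
is_psd = is_hermitian and bool(np.all(eigvals >= -1e-12))

print(is_hermitian, is_psd)  # → True True (eigenvalues are 1 and 4)
```

`eigvalsh` assumes its argument is Hermitian and returns real eigenvalues in ascending order; the small tolerance guards against floating-point round-off near zero.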
This is equivalent to the condition<\/p>\n<table summary=\"\" width=\"100%\" cellspacing=\"0\" cellpadding=\"0\">\n<tbody>\n<tr>\n<td align=\"left\"><img data-recalc-dims=\"1\" loading=\"lazy\" decoding=\"async\" src=\"https:\/\/i0.wp.com\/mathworld.wolfram.com\/images\/equations\/HermitianMatrix\/NumberedEquation2.gif?resize=57%2C18\" alt=\"a_(ij)=a^__(ji),\" width=\"57\" height=\"18\" \/><\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p><a href=\"http:\/\/mathworld.wolfram.com\/HermitianMatrix.html\">http:\/\/mathworld.wolfram.com\/HermitianMatrix.html<\/a><\/p>\n<h1>Definiteness of a matrix<\/h1>\n<p>&#8220;In <a title=\"Linear algebra\" href=\"https:\/\/en.wikipedia.org\/wiki\/Linear_algebra\">linear algebra<\/a>, a <a title=\"Symmetric matrix\" href=\"https:\/\/en.wikipedia.org\/wiki\/Symmetric_matrix\">symmetric<\/a> <img decoding=\"async\" src=\"https:\/\/wikimedia.org\/api\/rest_v1\/media\/math\/render\/svg\/59d2b4cb72e304526cf5b5887147729ea259da78\" alt=\"n\\times n\" \/> <a title=\"Real number\" href=\"https:\/\/en.wikipedia.org\/wiki\/Real_number\">real<\/a> <a title=\"Matrix (mathematics)\" href=\"https:\/\/en.wikipedia.org\/wiki\/Matrix_(mathematics)\">matrix<\/a> <img decoding=\"async\" src=\"https:\/\/wikimedia.org\/api\/rest_v1\/media\/math\/render\/svg\/f82cade9898ced02fdd08712e5f0c0151758a0dd\" alt=\"M\" \/> is said to be <strong>positive definite<\/strong> if the scalar <img decoding=\"async\" src=\"https:\/\/wikimedia.org\/api\/rest_v1\/media\/math\/render\/svg\/b49721caa1a0a8f5dd10a701181e93bf50ed5077\" alt=\"{\\displaystyle z^{\\textsf {T}}Mz}\" \/> is strictly positive for every non-zero column <a title=\"Vector (mathematics)\" href=\"https:\/\/en.wikipedia.org\/wiki\/Vector_(mathematics)\">vector<\/a> <img decoding=\"async\" 
src=\"https:\/\/wikimedia.org\/api\/rest_v1\/media\/math\/render\/svg\/a601995d55609f2d9f5e233e36fbe9ea26011b3b\" alt=\"n\" \/> real numbers. Here <img decoding=\"async\" src=\"https:\/\/wikimedia.org\/api\/rest_v1\/media\/math\/render\/svg\/21d44d0786dc6ba4a7aa8272444d00e4d15e0460\" alt=\"{\\displaystyle z^{\\textsf {T}}}\" \/> denotes the <a title=\"Transpose\" href=\"https:\/\/en.wikipedia.org\/wiki\/Transpose\">transpose<\/a> of <img decoding=\"async\" src=\"https:\/\/wikimedia.org\/api\/rest_v1\/media\/math\/render\/svg\/bf368e72c009decd9b6686ee84a375632e11de98\" alt=\"z\" \/>.<a href=\"https:\/\/en.wikipedia.org\/wiki\/Definiteness_of_a_matrix#cite_note-1\">[1]<\/a> When interpreting <img decoding=\"async\" src=\"https:\/\/wikimedia.org\/api\/rest_v1\/media\/math\/render\/svg\/7cd379c6f5f634b9f6e3b3e042e6e1a1dac79035\" alt=\"{\\displaystyle Mz}\" \/> as the output of an operator, <img decoding=\"async\" src=\"https:\/\/wikimedia.org\/api\/rest_v1\/media\/math\/render\/svg\/f82cade9898ced02fdd08712e5f0c0151758a0dd\" alt=\"M\" \/>, that is acting on an input, <img decoding=\"async\" src=\"https:\/\/wikimedia.org\/api\/rest_v1\/media\/math\/render\/svg\/bf368e72c009decd9b6686ee84a375632e11de98\" alt=\"z\" \/>, the property of positive definiteness implies that the output always has a positive <a title=\"Inner product\" href=\"https:\/\/en.wikipedia.org\/wiki\/Inner_product\">inner product<\/a> with the input, as often observed in physical processes.&#8221;<br \/>\n<a href=\"https:\/\/en.wikipedia.org\/wiki\/Definiteness_of_a_matrix\">https:\/\/en.wikipedia.org\/wiki\/Definiteness_of_a_matrix<\/a><\/p>\n<h1>Singular value decomposition<\/h1>\n<p>From Wikipedia, the free encyclopedia<\/p>\n<p><a 
href=\"https:\/\/en.wikipedia.org\/wiki\/File:Singular-Value-Decomposition.svg\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/upload.wikimedia.org\/wikipedia\/commons\/thumb\/b\/bb\/Singular-Value-Decomposition.svg\/220px-Singular-Value-Decomposition.svg.png\" alt=\"\" width=\"220\" height=\"199\" \/><\/a><br \/>\nIllustration of the singular value decomposition <strong>U\u03a3V<\/strong>* of a real 2\u00d72 matrix <strong>M<\/strong>.<\/p>\n<ul>\n<li><strong>Top:<\/strong> The action of <strong>M<\/strong>, indicated by its effect on the unit disc <em>D<\/em> and the two canonical unit vectors <em>e<\/em>1 and <em>e<\/em>2.<\/li>\n<li><strong>Left:<\/strong> The action of <strong>V<\/strong>*, a rotation, on <em>D<\/em>, <em>e<\/em>1, and <em>e<\/em>2.<\/li>\n<li><strong>Bottom:<\/strong> The action of <strong>\u03a3<\/strong>, a scaling by the singular values <em>\u03c3<\/em>1 horizontally and <em>\u03c3<\/em>2 vertically.<\/li>\n<li><strong>Right:<\/strong> The action of <strong>U<\/strong>, another rotation.<\/li>\n<\/ul>\n<p>In <a title=\"Linear algebra\" href=\"https:\/\/en.wikipedia.org\/wiki\/Linear_algebra\">linear algebra<\/a>, the <strong>singular value decomposition<\/strong> (<strong>SVD<\/strong>) is a <a title=\"Matrix decomposition\" href=\"https:\/\/en.wikipedia.org\/wiki\/Matrix_decomposition\">factorization<\/a> of a <a title=\"Real number\" href=\"https:\/\/en.wikipedia.org\/wiki\/Real_number\">real<\/a> or <a title=\"Complex number\" href=\"https:\/\/en.wikipedia.org\/wiki\/Complex_number\">complex<\/a> <a title=\"Matrix (mathematics)\" href=\"https:\/\/en.wikipedia.org\/wiki\/Matrix_(mathematics)\">matrix<\/a> that generalizes the <a title=\"Eigendecomposition\" href=\"https:\/\/en.wikipedia.org\/wiki\/Eigendecomposition\">eigendecomposition<\/a> of a square <a title=\"Normal matrix\" href=\"https:\/\/en.wikipedia.org\/wiki\/Normal_matrix\">normal matrix<\/a> to any {\\displaystyle m\\times n}<img decoding=\"async\" 
src=\"https:\/\/wikimedia.org\/api\/rest_v1\/media\/math\/render\/svg\/12b23d207d23dd430b93320539abbb0bde84870d\" alt=\"m\\times n\" \/> matrix via an extension of the <a title=\"Polar decomposition\" href=\"https:\/\/en.wikipedia.org\/wiki\/Polar_decomposition#Matrix_polar_decomposition\">polar decomposition<\/a>.<\/p>\n<p>Specifically, the singular value decomposition of an <img decoding=\"async\" src=\"https:\/\/wikimedia.org\/api\/rest_v1\/media\/math\/render\/svg\/12b23d207d23dd430b93320539abbb0bde84870d\" alt=\"m\\times n\" \/> real or complex matrix <img decoding=\"async\" src=\"https:\/\/wikimedia.org\/api\/rest_v1\/media\/math\/render\/svg\/e499ae5946af9c09777ada933051b3669d3372c2\" alt=\"\\mathbf {M}\" \/> is a factorization of the form <img decoding=\"async\" src=\"https:\/\/wikimedia.org\/api\/rest_v1\/media\/math\/render\/svg\/b696ac58329349e430962ce8fa94b50a60ea30a5\" alt=\"{\\displaystyle \\mathbf {U\\Sigma V^{*}} }\" \/>, where <img decoding=\"async\" src=\"https:\/\/wikimedia.org\/api\/rest_v1\/media\/math\/render\/svg\/b2141bec2344e3dc5241ff50b0fd366755e00223\" alt=\"\\mathbf {U}\" \/> is an <img decoding=\"async\" src=\"https:\/\/wikimedia.org\/api\/rest_v1\/media\/math\/render\/svg\/367523981d714dcd9214703d654bfdedbe58d44a\" alt=\"m\\times m\" \/> real or complex <a title=\"Unitary matrix\" href=\"https:\/\/en.wikipedia.org\/wiki\/Unitary_matrix\">unitary matrix<\/a>, <img decoding=\"async\" src=\"https:\/\/wikimedia.org\/api\/rest_v1\/media\/math\/render\/svg\/90f99b56fe6ada781ecd0f8a45b6e787b6dfed56\" alt=\"\\mathbf{\\Sigma}\" \/> is an <img decoding=\"async\" src=\"https:\/\/wikimedia.org\/api\/rest_v1\/media\/math\/render\/svg\/12b23d207d23dd430b93320539abbb0bde84870d\" alt=\"m\\times n\" \/> <a title=\"Rectangular diagonal matrix\" href=\"https:\/\/en.wikipedia.org\/wiki\/Rectangular_diagonal_matrix\">rectangular diagonal matrix<\/a> with non-negative real numbers on the diagonal, and <img decoding=\"async\" 
src=\"https:\/\/wikimedia.org\/api\/rest_v1\/media\/math\/render\/svg\/c0048514530d0c0fb8a7beb795110815a818784d\" alt=\"\\mathbf {V}\" \/> is an <img decoding=\"async\" src=\"https:\/\/wikimedia.org\/api\/rest_v1\/media\/math\/render\/svg\/59d2b4cb72e304526cf5b5887147729ea259da78\" alt=\"n\\times n\" \/> real or complex unitary matrix. If <img decoding=\"async\" src=\"https:\/\/wikimedia.org\/api\/rest_v1\/media\/math\/render\/svg\/e499ae5946af9c09777ada933051b3669d3372c2\" alt=\"\\mathbf {M}\" \/> is real, <img decoding=\"async\" src=\"https:\/\/wikimedia.org\/api\/rest_v1\/media\/math\/render\/svg\/b2141bec2344e3dc5241ff50b0fd366755e00223\" alt=\"\\mathbf {U}\" \/> and <img decoding=\"async\" src=\"https:\/\/wikimedia.org\/api\/rest_v1\/media\/math\/render\/svg\/69578cc01a38465bacc08407ae77947174698dab\" alt=\"{\\displaystyle \\mathbf {V} =\\mathbf {V^{*}} }\" \/> are real <a title=\"Orthonormal matrix\" href=\"https:\/\/en.wikipedia.org\/wiki\/Orthonormal_matrix\">orthonormal<\/a> matrices.&#8221;<\/p>\n<p><a href=\"https:\/\/i0.wp.com\/bangla.salearningschool.com\/wp-content\/uploads\/2020\/01\/image-7.png\" rel=\"attachment wp-att-16699\"><img data-recalc-dims=\"1\" loading=\"lazy\" decoding=\"async\" class=\"alignnone  wp-image-16699\" title=\"image-7-png\" src=\"https:\/\/i0.wp.com\/bangla.salearningschool.com\/wp-content\/uploads\/2020\/01\/image-7.png?resize=492%2C575\" alt=\"\" width=\"492\" height=\"575\" \/><\/a><\/p>\n<p><a href=\"https:\/\/en.wikipedia.org\/wiki\/Singular_value_decomposition\">https:\/\/en.wikipedia.org\/wiki\/Singular_value_decomposition<\/a><\/p>\n<h1>PCA using Python (scikit-learn)<\/h1>\n<p><a href=\"https:\/\/towardsdatascience.com\/pca-using-python-scikit-learn-e653f8989e60\">https:\/\/towardsdatascience.com\/pca-using-python-scikit-learn-e653f8989e60<\/a><\/p>\n<p>Random R code in relation to PCA<\/p>\n<p>#calculate covariance matrix<br \/>\ncov_mat = cov(normalized_mat)<\/p>\n<p>#Calculation of eigen values using built in 
eigen() function<br \/>\n# no need to implement eigendecomposition ourselves<br \/>\neig &lt;- eigen(cov_mat)<\/p>\n<p># verify with R&#8217;s built-in prcomp() (principal components) function<br \/>\nprcomp(pca_data)<\/p>\n<p>eig$vectors<\/p>\n<p>t(eig$vectors)<\/p>\n<p><strong>Some more information on PCA and FA (Factor Analysis)<\/strong><br \/>\n<a href=\"https:\/\/www.cs.rutgers.edu\/~elgammal\/classes\/cs536\/lectures\/i2ml-chap6.pdf\">https:\/\/www.cs.rutgers.edu\/~elgammal\/classes\/cs536\/lectures\/i2ml-chap6.pdf<\/a><\/p>\n<p><em><strong>*** *** ***<\/strong><\/em><br \/>\n<em><strong>Note: Older short-notes from this site are posted on Medium: <\/strong><\/em><a href=\"https:\/\/medium.com\/@SayedAhmedCanada\">https:\/\/medium.com\/@SayedAhmedCanada<\/a><\/p>\n<p>*** . *** *** . *** . *** . ***<\/p>\n<p><em><strong>Sayed Ahmed<\/strong><br \/>\n<\/em><br \/>\n<em><strong>BSc. Eng. in Comp. Sc. &amp; Eng. (BUET)<\/strong><\/em><br \/>\n<em><strong>MSc. in Comp. Sc. (U of Manitoba, Canada)<\/strong><\/em><br \/>\n<em><strong>MSc. 
in Data Science and Analytics (Ryerson University, Canada)<\/strong><\/em><br \/>\n<em><strong>Linkedin<\/strong>: <a href=\"https:\/\/ca.linkedin.com\/in\/sayedjustetc\">https:\/\/ca.linkedin.com\/in\/sayedjustetc<\/a><br \/>\n<\/em><\/p>\n<p><em><strong>Blog<\/strong>: <a href=\"http:\/\/bangla.salearningschool.com\/\">http:\/\/Bangla.SaLearningSchool.com<\/a>, <a href=\"http:\/\/sitestree.com\">http:\/\/SitesTree.com<\/a><\/em><br \/>\n<em><strong>Online and Offline Training<\/strong>: <a href=\"http:\/\/training.SitesTree.com\">http:\/\/Training.SitesTree.com<\/a> (Also, can be free and low cost sometimes)<\/em><\/p>\n<p><em>Facebook Group\/Form to discuss (Q &amp; A): <\/em><a href=\"https:\/\/www.facebook.com\/banglasalearningschool\">https:\/\/www.facebook.com\/banglasalearningschool<\/a><\/p>\n<p>Our free or paid training events: <a href=\"https:\/\/www.facebook.com\/justetcsocial\">https:\/\/www.facebook.com\/justetcsocial<\/a><\/p>\n<p><em>Get access to courses on Big Data, Data Science, AI, Cloud, Linux, System Admin, Web Development and Misc. related. Also, create your own course to sell to others. <\/em><a href=\"http:\/\/sitestree.com\/training\/\">http:\/\/sitestree.com\/training\/<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>&#8220;In mathematics, a set B of elements (vectors) in a vector space V is called a basis, if every element of V may be written in a unique way as a (finite) linear combination of elements of B. 
The coefficients of this linear combination are referred to as components or coordinates on B of the &hellip; <\/p>\n<p><a class=\"more-link btn\" href=\"http:\/\/bangla.sitestree.com\/?p=16698\">Continue reading<\/a><\/p>\n","protected":false},"author":2,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_jetpack_memberships_contains_paid_content":false,"footnotes":""},"categories":[1910,1908,182],"tags":[],"class_list":["post-16698","post","type-post","status-publish","format-standard","hentry","category-ai-ml-ds-rl-dl-nn-nlp-data-mining-optimization","category-math-and-statistics-for-data-science-and-engineering","category---blog","item-wrap"],"jetpack_featured_media_url":"","jetpack_sharing_enabled":true,"jetpack-related-posts":[{"id":16701,"url":"http:\/\/bangla.sitestree.com\/?p=16701","url_meta":{"origin":16698,"position":0},"title":"Misc. Math. Data Science. Machine Learning. Optimization. Vector, PCA, Basis, Covariance","author":"Sayed","date":"January 30, 2020","format":false,"excerpt":"Misc. Math. Data Science. Machine Learning. Optimization. Vector, PCA, Basis, Covariance Orthonormality: Orthonormal Vectors \"In linear algebra, two vectors in an inner product space are orthonormal if they are orthogonal and unit vectors. A set of vectors form an orthonormal set if all vectors in the set are mutually orthogonal\u2026","rel":"","context":"In &quot;AI ML DS RL DL NN NLP Data Mining Optimization&quot;","block_context":{"text":"AI ML DS RL DL NN NLP Data Mining Optimization","link":"http:\/\/bangla.sitestree.com\/?cat=1910"},"img":{"alt_text":"","src":"https:\/\/i0.wp.com\/bangla.sitestree.com\/wp-content\/uploads\/2020\/01\/image-8-e1580436936625.png?resize=350%2C200","width":350,"height":200},"classes":[]},{"id":16682,"url":"http:\/\/bangla.sitestree.com\/?p=16682","url_meta":{"origin":16698,"position":1},"title":"Misc. Math. 
Might Relate to Optimization","author":"Sayed","date":"January 26, 2020","format":false,"excerpt":"find the equation for a line http:\/\/www.webmath.com\/_answer.php Parametric forms for lines and vectors https:\/\/www.futurelearn.com\/courses\/maths-linear-quadratic-relations\/0\/steps\/12128 Solving Systems of Linear Equations Using Matrices https:\/\/www.mathsisfun.com\/algebra\/systems-linear-equations-matrices.html Affine Space \" \" Subspace https:\/\/www.wolframalpha.com\/input\/?i=subspace \"What is an affine set? A set is called \u201caffine\u201d iff for any two points in the set, the line through them\u2026","rel":"","context":"In &quot;Math and Statistics for Data Science, and Engineering&quot;","block_context":{"text":"Math and Statistics for Data Science, and Engineering","link":"http:\/\/bangla.sitestree.com\/?cat=1908"},"img":{"alt_text":"","src":"https:\/\/i0.wp.com\/bangla.salearningschool.com\/wp-content\/uploads\/2020\/01\/image-6.png?resize=350%2C200","width":350,"height":200,"srcset":"https:\/\/i0.wp.com\/bangla.salearningschool.com\/wp-content\/uploads\/2020\/01\/image-6.png?resize=350%2C200 1x, https:\/\/i0.wp.com\/bangla.salearningschool.com\/wp-content\/uploads\/2020\/01\/image-6.png?resize=525%2C300 1.5x"},"classes":[]},{"id":16629,"url":"http:\/\/bangla.sitestree.com\/?p=16629","url_meta":{"origin":16698,"position":2},"title":"Misc. Basic Concepts From Linear Algebra","author":"Sayed","date":"January 12, 2020","format":false,"excerpt":"\"Vector Space Description A vector space is a collection of objects called vectors, which may be added together and multiplied by numbers, called scalars. 
Scalars are often taken to be real numbers, but there are also vector spaces with scalar multiplication by complex numbers, rational numbers, or generally any field.\u2026","rel":"","context":"In &quot;\u09ac\u09cd\u09b2\u0997 \u0964 Blog&quot;","block_context":{"text":"\u09ac\u09cd\u09b2\u0997 \u0964 Blog","link":"http:\/\/bangla.sitestree.com\/?cat=182"},"img":{"alt_text":"","src":"https:\/\/i0.wp.com\/bangla.salearningschool.com\/wp-content\/uploads\/2020\/01\/image.png?resize=350%2C200","width":350,"height":200,"srcset":"https:\/\/i0.wp.com\/bangla.salearningschool.com\/wp-content\/uploads\/2020\/01\/image.png?resize=350%2C200 1x, https:\/\/i0.wp.com\/bangla.salearningschool.com\/wp-content\/uploads\/2020\/01\/image.png?resize=525%2C300 1.5x, https:\/\/i0.wp.com\/bangla.salearningschool.com\/wp-content\/uploads\/2020\/01\/image.png?resize=700%2C400 2x"},"classes":[]},{"id":16689,"url":"http:\/\/bangla.sitestree.com\/?p=16689","url_meta":{"origin":16698,"position":3},"title":"Misc. Math for Data Science, Engineering, and\/or Optimization","author":"Sayed","date":"January 28, 2020","format":false,"excerpt":"What is the Inverse of a Matrix? https:\/\/www.mathsisfun.com\/algebra\/matrix-inverse.html What is Norm? \"In linear algebra, functional analysis, and related areas of mathematics, a norm is a function that satisfies certain properties pertaining to scalability and additivity, and assigns a strictly positive real number to each vector in a vector space over\u2026","rel":"","context":"In &quot;AI ML DS RL DL NN NLP Data Mining Optimization&quot;","block_context":{"text":"AI ML DS RL DL NN NLP Data Mining Optimization","link":"http:\/\/bangla.sitestree.com\/?cat=1910"},"img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":16641,"url":"http:\/\/bangla.sitestree.com\/?p=16641","url_meta":{"origin":16698,"position":4},"title":"Misc. Optimization. 
Machine Learning","author":"Sayed","date":"January 14, 2020","format":false,"excerpt":"\"What is machine learning optimization? Optimization is the most essential ingredient in the recipe of machine learning algorithms. It starts with defining some kind of loss function\/cost function and ends with minimizing the it using one or the other optimization routine.Sep 5, 2018\" https:\/\/towardsdatascience.com\/demystifying-optimizations-for-machine-learning-c6c6405d3eea Ordered vector space \"Given a vector\u2026","rel":"","context":"In &quot;AI ML DS RL DL NN NLP Data Mining Optimization&quot;","block_context":{"text":"AI ML DS RL DL NN NLP Data Mining Optimization","link":"http:\/\/bangla.sitestree.com\/?cat=1910"},"img":{"alt_text":"A-B","src":"https:\/\/i0.wp.com\/mathworld.wolfram.com\/images\/equations\/VectorOrdering\/Inline1.gif?resize=350%2C200","width":350,"height":200},"classes":[]},{"id":71364,"url":"http:\/\/bangla.sitestree.com\/?p=71364","url_meta":{"origin":16698,"position":5},"title":"Define\/describe Linear vector space in terms of the vectors [a_1, a_2, &#8230;., a_n]","author":"Sayed","date":"October 3, 2021","format":false,"excerpt":"If you can, answer the question below: Write your answer in the comment box. 
Define\/describe Linear vector space in terms of the vectors [a_1, a_2, ...., a_n]","rel":"","context":"In &quot;Matrix and Signal Processing&quot;","block_context":{"text":"Matrix and Signal Processing","link":"http:\/\/bangla.sitestree.com\/?cat=1944"},"img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]}],"_links":{"self":[{"href":"http:\/\/bangla.sitestree.com\/index.php?rest_route=\/wp\/v2\/posts\/16698","targetHints":{"allow":["GET"]}}],"collection":[{"href":"http:\/\/bangla.sitestree.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"http:\/\/bangla.sitestree.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"http:\/\/bangla.sitestree.com\/index.php?rest_route=\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"http:\/\/bangla.sitestree.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=16698"}],"version-history":[{"count":2,"href":"http:\/\/bangla.sitestree.com\/index.php?rest_route=\/wp\/v2\/posts\/16698\/revisions"}],"predecessor-version":[{"id":16726,"href":"http:\/\/bangla.sitestree.com\/index.php?rest_route=\/wp\/v2\/posts\/16698\/revisions\/16726"}],"wp:attachment":[{"href":"http:\/\/bangla.sitestree.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=16698"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"http:\/\/bangla.sitestree.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=16698"},{"taxonomy":"post_tag","embeddable":true,"href":"http:\/\/bangla.sitestree.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=16698"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}
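The R snippet in the post body (covariance matrix, then eigen(), cross-checked with prcomp()) can be mirrored in Python with NumPy. A minimal sketch; the data matrix and all variable names are illustrative, not taken from the post:

```python
import numpy as np

# Illustrative data: 6 observations (rows) of 2 features (columns)
X = np.array([[2.5, 2.4], [0.5, 0.7], [2.2, 2.9],
              [1.9, 2.2], [3.1, 3.0], [2.3, 2.7]])

# Center the data (subtract the per-feature mean)
Xc = X - X.mean(axis=0)

# Covariance matrix of the features
cov_mat = np.cov(Xc, rowvar=False)

# Eigendecomposition; eigh applies because cov_mat is symmetric
eig_vals, eig_vecs = np.linalg.eigh(cov_mat)

# Sort components by decreasing eigenvalue (explained variance)
order = np.argsort(eig_vals)[::-1]
eig_vals, eig_vecs = eig_vals[order], eig_vecs[:, order]

# Project the centered data onto the principal components
scores = Xc @ eig_vecs
print(eig_vals)  # variances along each principal axis
```

Sorting by decreasing eigenvalue matches the convention of R's prcomp() and of scikit-learn's PCA, both of which order components by explained variance.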