{"id":16743,"date":"2020-02-06T12:28:08","date_gmt":"2020-02-06T17:28:08","guid":{"rendered":"http:\/\/bangla.salearningschool.com\/recent-posts\/nlp-ai-and-ml\/"},"modified":"2020-02-08T09:41:29","modified_gmt":"2020-02-08T14:41:29","slug":"nlp-ai-and-ml","status":"publish","type":"post","link":"http:\/\/bangla.sitestree.com\/?p=16743","title":{"rendered":"NLP : AI and ML"},"content":{"rendered":"<p><strong>Feature Selection<\/strong><\/p>\n<h3>Filter Methods<\/h3>\n<h3>Wrapper Methods<\/h3>\n<h3>Embedded Methods<\/h3>\n<p><strong>Feature Selection Checklist<\/strong><\/p>\n<ol>\n<li><strong>Do you have domain knowledge?<\/strong><\/li>\n<li><strong>Are your features commensurate?<\/strong><\/li>\n<li><strong>Do you suspect interdependence of features?<\/strong><\/li>\n<\/ol>\n<ul>\n<li><strong>Weka<\/strong>: \u201c<a title=\"Feature Selection to Improve Accuracy and Decrease Training Time\" href=\"http:\/\/machinelearningmastery.com\/feature-selection-to-improve-accuracy-and-decrease-training-time\/\">Feature Selection to Improve Accuracy and Decrease Training Time<\/a>\u201d<\/li>\n<li><strong>Scikit-Learn<\/strong>: \u201c<a title=\"Feature Selection in Python with Scikit-Learn\" href=\"http:\/\/machinelearningmastery.com\/feature-selection-in-python-with-scikit-learn\/\">Feature Selection in Python with Scikit-Learn<\/a>\u201d<\/li>\n<li><strong>R<\/strong>: \u201c<a title=\"Feature Selection with the Caret R Package\" href=\"http:\/\/machinelearningmastery.com\/feature-selection-with-the-caret-r-package\/\">Feature Selection with the Caret R Package<\/a>\u201d<\/li>\n<\/ul>\n<p><strong>Reference for the information above<\/strong>: <a href=\"https:\/\/machinelearningmastery.com\/an-introduction-to-feature-selection\/\">https:\/\/machinelearningmastery.com\/an-introduction-to-feature-selection\/<\/a><\/p>\n<p>Feature Selection with Neural Networks<\/p>\n<p><a 
href=\"http:\/\/citeseerx.ist.psu.edu\/viewdoc\/download?doi=10.1.1.54.4570&amp;rep=rep1&amp;type=pdf\">http:\/\/citeseerx.ist.psu.edu\/viewdoc\/download?doi=10.1.1.54.4570&amp;rep=rep1&amp;type=pdf<\/a><\/p>\n<p><strong>Manifold learning<\/strong><\/p>\n<p>&quot;<strong>Manifold learning<\/strong> is an approach to non-linear dimensionality reduction. Algorithms for this task are based on the idea that the dimensionality of many data sets is only artificially high&quot;<br \/>\n&quot;linear dimensionality reduction frameworks have been designed, such as Principal Component Analysis (PCA), Independent Component Analysis, Linear Discriminant Analysis&quot;<br \/>\n&quot;Manifold Learning can be thought of as an attempt to generalize linear frameworks like PCA to be sensitive to non-linear structure in data.&quot;<\/p>\n<p><strong>Manifold Learning Approaches:<\/strong><\/p>\n<p>Isomap: nearest neighbor search; shortest-path graph search; partial eigenvalue decomposition<\/p>\n<p>Locally Linear Embedding: nearest neighbors search; weight matrix construction
; partial eigenvalue decomposition<\/p>\n<p>Modified Locally Linear Embedding<\/p>\n<p>Hessian Eigenmapping<\/p>\n<p>Spectral Embedding<\/p>\n<p>Local Tangent Space Alignment<\/p>\n<p>Multi-dimensional Scaling (MDS)<\/p>\n<p>t-distributed Stochastic Neighbor Embedding (t-SNE)<\/p>\n<p>Reference: <a href=\"https:\/\/scikit-learn.org\/stable\/modules\/manifold.html\">https:\/\/scikit-learn.org\/stable\/modules\/manifold.html<\/a><\/p>\n<p><strong>Nonlinear Principal Component Analysis<\/strong><\/p>\n<p><strong>Might not be that helpful: <\/strong><a href=\"https:\/\/www.image.ucar.edu\/pub\/toyIV\/monahan_5_16.pdf\">https:\/\/www.image.ucar.edu\/pub\/toyIV\/monahan_5_16.pdf<\/a><\/p>\n<h3>Nonlinear PCA<\/h3>\n<p>&quot;Nonlinear PCA<a href=\"https:\/\/en.wikipedia.org\/wiki\/Nonlinear_dimensionality_reduction#cite_note-42\">[42]<\/a> (NLPCA) uses <a href=\"https:\/\/en.wikipedia.org\/wiki\/Backpropagation\" title=\"Backpropagation\">backpropagation<\/a> to train a multi-layer perceptron (MLP) to fit to a manifold. Unlike typical MLP training, which only updates the weights, NLPCA updates both the weights and the inputs. That is, both the weights and inputs are treated as latent values. 
After training, the latent inputs are a low-dimensional representation of the observed vectors, and the MLP maps from that low-dimensional representation to the high-dimensional observation space.&quot;<br \/>\n<a href=\"https:\/\/en.wikipedia.org\/wiki\/Nonlinear_dimensionality_reduction#Nonlinear_PCA\">https:\/\/en.wikipedia.org\/wiki\/Nonlinear_dimensionality_reduction#Nonlinear_PCA<\/a><\/p>\n<p>&quot;Nonlinear principal component analysis (NLPCA) is commonly seen as a nonlinear generalization of standard principal component analysis (PCA). It generalizes the principal components from straight lines to curves (nonlinear). Thus, the subspace in the original data space which is described by all nonlinear components is also curved.<br \/>\nNonlinear PCA can be achieved by using a neural network with an autoassociative architecture also known as autoencoder, replicator network, bottleneck or sandglass type network. Such autoassociative neural network is a multi-layer perceptron that performs an identity mapping, meaning that the output of the network is required to be identical to the input. However, in the middle of the network is a layer that works as a bottleneck in which a reduction of the dimension of the data is enforced. 
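<\/p>
<p><em>The bottleneck idea quoted above can be sketched as a tiny autoassociative network trained to reproduce its own input. This is an illustrative sketch, assuming scikit-learn&#8217;s MLPRegressor as the MLP and toy parabola data; the layer sizes are arbitrary choices, not the nlpca.org implementation.<\/em><\/p>

```python
# Illustrative bottleneck autoencoder (autoassociative network).
# The 1-unit middle layer forces a 1-D encoding of 2-D inputs,
# in the spirit of nonlinear PCA.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
t = rng.uniform(-1, 1, size=200)
X = np.column_stack([t, t ** 2])   # points on a parabola: intrinsically 1-D

# Identity mapping: the network is trained so output = input.
ae = MLPRegressor(hidden_layer_sizes=(16, 1, 16), activation='tanh',
                  solver='lbfgs', max_iter=2000, random_state=0)
ae.fit(X, X)

X_hat = ae.predict(X)              # reconstruction through the bottleneck
print(X_hat.shape)                 # (200, 2)
```

<p>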
This bottleneck-layer provides the desired component values (scores).&quot;<br \/>\n<a href=\"http:\/\/www.nlpca.org\/\">http:\/\/www.nlpca.org\/<\/a><\/p>\n<h1>Principal Component Analysis<\/h1>\n<p><a href=\"https:\/\/medium.com\/maheshkkumar\/principal-component-analysis-2d11043ff324\">https:\/\/medium.com\/maheshkkumar\/principal-component-analysis-2d11043ff324<\/a><\/p>\n<p>Eigenvectors and Eigenvalues<br \/>\n<a href=\"https:\/\/medium.com\/@dareyadewumi650\/understanding-the-role-of-eigenvectors-and-eigenvalues-in-pca-dimensionality-reduction-10186dad0c5c\">https:\/\/medium.com\/@dareyadewumi650\/understanding-the-role-of-eigenvectors-and-eigenvalues-in-pca-dimensionality-reduction-10186dad0c5c<\/a><\/p>\n<h1>Singular value decomposition<\/h1>\n<p>From Wikipedia, the free encyclopedia<\/p>\n<p><a href=\"https:\/\/en.wikipedia.org\/wiki\/File:Singular-Value-Decomposition.svg\"><img loading=\"lazy\" decoding=\"async\" alt=\"\" src=\"https:\/\/upload.wikimedia.org\/wikipedia\/commons\/thumb\/b\/bb\/Singular-Value-Decomposition.svg\/220px-Singular-Value-Decomposition.svg.png\" width=\"220\" height=\"199\" \/><\/a><br \/>\nIllustration of the singular value decomposition <strong>U\u03a3V<\/strong>* of a real 2\u00d72 matrix <strong>M<\/strong>.<\/p>\n<ul>\n<li><strong>Top:<\/strong> The action of <strong>M<\/strong>, indicated by its effect on the unit disc <em>D<\/em> and the two canonical unit vectors <em>e<\/em>1 and <em>e<\/em>2.<\/li>\n<li><strong>Left:<\/strong> The action of <strong>V<\/strong>*, a rotation, on <em>D<\/em>, <em>e<\/em>1, and <em>e<\/em>2.<\/li>\n<li><strong>Bottom:<\/strong> The action of <strong>\u03a3<\/strong>, a scaling by the singular values <em>\u03c3<\/em>1 horizontally and <em>\u03c3<\/em>2 
vertically.<\/li>\n<li><strong>Right:<\/strong> The action of <strong>U<\/strong>, another rotation.<\/li>\n<\/ul>\n<p>In <a href=\"https:\/\/en.wikipedia.org\/wiki\/Linear_algebra\" title=\"\">linear algebra<\/a>, the <strong>singular value decomposition<\/strong> (<strong>SVD<\/strong>) is a <a href=\"https:\/\/en.wikipedia.org\/wiki\/Matrix_decomposition\" title=\"Matrix decomposition\">factorization<\/a> of a <a href=\"https:\/\/en.wikipedia.org\/wiki\/Real_number\" title=\"Real number\">real<\/a> or <a href=\"https:\/\/en.wikipedia.org\/wiki\/Complex_number\" title=\"Complex number\">complex<\/a> <a href=\"https:\/\/en.wikipedia.org\/wiki\/Matrix_(mathematics)\" title=\"Matrix (mathematics)\">matrix<\/a> that generalizes the <a href=\"https:\/\/en.wikipedia.org\/wiki\/Eigendecomposition\" title=\"Eigendecomposition\">eigendecomposition<\/a> of a square <a href=\"https:\/\/en.wikipedia.org\/wiki\/Normal_matrix\" title=\"Normal matrix\">normal matrix<\/a> to any <em>m<\/em>\u00d7<em>n<\/em> matrix via an extension of the <a href=\"https:\/\/en.wikipedia.org\/wiki\/Polar_decomposition#Matrix_polar_decomposition\" title=\"Polar decomposition\">polar decomposition<\/a>.<\/p>\n<p>Specifically, the singular value decomposition of an <em>m<\/em>\u00d7<em>n<\/em> real or complex matrix <strong>M<\/strong> is a factorization of the form <strong>U\u03a3V<\/strong>*, where <strong>U<\/strong> is an <em>m<\/em>\u00d7<em>m<\/em> real or complex <a href=\"https:\/\/en.wikipedia.org\/wiki\/Unitary_matrix\" title=\"Unitary matrix\">unitary matrix<\/a>, <strong>\u03a3<\/strong> is an <em>m<\/em>\u00d7<em>n<\/em> <a href=\"https:\/\/en.wikipedia.org\/wiki\/Rectangular_diagonal_matrix\" title=\"Rectangular diagonal matrix\">rectangular diagonal matrix<\/a> with non-negative real numbers on the diagonal, and <strong>V<\/strong> is an <em>n<\/em>\u00d7<em>n<\/em> real or complex unitary matrix. 
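<\/p>
<p><em>The factorization can be checked numerically. A small sketch with NumPy&#8217;s linalg.svd; the 2\u00d72 matrix is an arbitrary example, not taken from the article.<\/em><\/p>

```python
# Verify M = U Sigma V* (real case) with NumPy.
import numpy as np

M = np.array([[3.0, 0.0],
              [4.0, 5.0]])
U, s, Vh = np.linalg.svd(M)   # s holds the singular values, descending
Sigma = np.diag(s)            # rectangular diagonal Sigma (square here: 2x2)

# U and V are orthogonal in the real case, and U Sigma V* reconstructs M.
assert np.allclose(U @ U.T, np.eye(2))
assert np.allclose(U @ Sigma @ Vh, M)
print(np.round(s, 4))         # -> [6.7082 2.2361]
```

<p>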
If <strong>M<\/strong> is real, <strong>U<\/strong> and <strong>V<\/strong> = <strong>V<\/strong>* are real <a href=\"https:\/\/en.wikipedia.org\/wiki\/Orthonormal_matrix\" title=\"Orthonormal matrix\">orthonormal<\/a> matrices.<\/p>\n<p><em><strong>Ref: <\/strong><\/em><a href=\"https:\/\/en.wikipedia.org\/wiki\/Singular_value_decomposition\">https:\/\/en.wikipedia.org\/wiki\/Singular_value_decomposition<\/a><\/p>\n<p><strong>Graph Neural Networks<\/strong><br \/>\n&quot;<strong>Graph Neural Network<\/strong> is a type of <strong>Neural Network<\/strong> which directly operates on the <strong>Graph<\/strong> structure. A typical application of GNN is node classification. 
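<\/p>
<p><em>A minimal sketch of the message-passing idea, assuming a GCN-style propagation rule on a toy 4-node graph; the adjacency matrix, feature matrix, and weights below are illustrative placeholders, not a trained GNN.<\/em><\/p>

```python
# One message-passing (GCN-style) layer in NumPy:
# H1 = ReLU(D^{-1/2} (A + I) D^{-1/2} H W)
import numpy as np

A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)   # small undirected toy graph
H = np.eye(4)                               # one-hot input features per node

A_hat = A + np.eye(4)                       # add self-loops
d = A_hat.sum(axis=1)
D_inv_sqrt = np.diag(d ** -0.5)             # symmetric degree normalization

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 2))                 # random layer weights (placeholder)

# Each node aggregates its neighbors' features, then applies W and ReLU.
H1 = np.maximum(0, D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W)
print(H1.shape)                             # each node now has a 2-D embedding
```

<p>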
Essentially, every node in the <strong>graph<\/strong> is associated with a label, and we want to predict the label of the nodes without ground-truth.&quot; (Feb 10, 2019)<br \/>\n<a href=\"https:\/\/towardsdatascience.com\/a-gentle-introduction-to-graph-neural-network-basics-deepwalk-and-graphsage-db5d540d50b3\">https:\/\/towardsdatascience.com\/a-gentle-introduction-to-graph-neural-network-basics-deepwalk-and-graphsage-db5d540d50b3<\/a><\/p>\n<p>&quot;How do Graph neural networks work?<br \/>\nGraph neural networks (GNNs) are connectionist models that capture the dependence of graphs via message passing between the nodes of graphs. Unlike standard neural networks, graph neural networks retain a state that can represent information from its neighborhood with arbitrary depth.&quot;<\/p>\n<h3><a href=\"https:\/\/arxiv.org\/pdf\/1812.08434\">Graph Neural Networks &#8211; arXiv<\/a><\/h3>\n<p>Deep Reinforcement Learning meets Graph Neural Networks: An optical network routing use case<\/p>\n<p><a href=\"https:\/\/arxiv.org\/pdf\/1910.07421.pdf\">https:\/\/arxiv.org\/pdf\/1910.07421.pdf<\/a><\/p>\n<p><strong>Bin Counting and Text Analysis<\/strong><\/p>\n<p><em><strong>***. ***. *** ***<\/strong><\/em><br \/>\n<em><strong>Note: Older short-notes from this site are posted on Medium: <\/strong><\/em><a href=\"https:\/\/medium.com\/@SayedAhmedCanada\">https:\/\/medium.com\/@SayedAhmedCanada<\/a><\/p>\n<p>*** . *** *** . *** . *** . ***<br \/>\n<em><strong>Sayed Ahmed<\/strong><\/em><br \/>\n<em><strong>BSc. Eng. 
in Comp. Sc. &amp; Eng. (BUET)<\/strong><\/em><br \/>\n<em><strong>MSc. in Comp. Sc. (U of Manitoba, Canada)<\/strong><\/em><br \/>\n<em><strong>MSc. in Data Science and Analytics (Ryerson University, Canada)<\/strong><\/em><br \/>\n<em><strong>LinkedIn<\/strong>: <a href=\"https:\/\/ca.linkedin.com\/in\/sayedjustetc\">https:\/\/ca.linkedin.com\/in\/sayedjustetc<\/a><br \/>\n<\/em><\/p>\n<p><em><strong>Blog<\/strong>: <a href=\"http:\/\/bangla.salearningschool.com\/\">http:\/\/Bangla.SaLearningSchool.com<\/a>, <a href=\"http:\/\/sitestree.com\">http:\/\/SitesTree.com<\/a><\/em><br \/>\n<em><strong>Online and Offline Training<\/strong>: <a href=\"http:\/\/training.SitesTree.com\">http:\/\/Training.SitesTree.com<\/a> (sometimes free or low-cost)<\/em><\/p>\n<p><em>Facebook Group\/Forum to discuss (Q &amp; A): <\/em><a href=\"https:\/\/www.facebook.com\/banglasalearningschool\">https:\/\/www.facebook.com\/banglasalearningschool<\/a><\/p>\n<p>Our free or paid training events: <a href=\"https:\/\/www.facebook.com\/justetcsocial\">https:\/\/www.facebook.com\/justetcsocial<\/a><\/p>\n<p><em>Get access to courses on Big Data, Data Science, AI, Cloud, Linux, System Admin, Web Development, and related topics. Also, create your own course to sell to others. 
<\/em><a href=\"http:\/\/sitestree.com\/training\/\">http:\/\/sitestree.com\/training\/<\/a><\/p>\n<p><em>If you want to contribute to occasional free and\/or low-cost online\/offline training, or to charitable\/non-profit work in the education\/health\/social service sectors, you can contribute financially to safoundation at <a href=\"http:\/\/salearningschool.com\">salearningschool.com<\/a> using PayPal or Credit Card (on <a href=\"http:\/\/sitestree.com\/training\/enrol\/index.php?id=114\">http:\/\/sitestree.com\/training\/enrol\/index.php?id=114<\/a>).<\/em><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Feature Selection &quot; Filter Methods Wrapper Methods Embedded Methods Feature Selection Checklist Do you have domain knowledge? Are your features commensurate? Do you suspect interdependence of features? If &quot;Weka: \u201cFeature Selection to Improve Accuracy and Decrease Training Time\u201c. Scikit-Learn: \u201cFeature Selection in Python with Scikit-Learn\u201c. 
R: \u201cFeature Selection with the Caret R Package\u201c&quot; &quot; Reference &hellip; <\/p>\n<p><a class=\"more-link btn\" href=\"http:\/\/bangla.sitestree.com\/?p=16743\">Continue reading<\/a><\/p>\n","protected":false},"author":2,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_jetpack_memberships_contains_paid_content":false,"footnotes":""},"categories":[1910,182],"tags":[],"class_list":["post-16743","post","type-post","status-publish","format-standard","hentry","category-ai-ml-ds-rl-dl-nn-nlp-data-mining-optimization","category---blog","item-wrap"],"jetpack_featured_media_url":"","jetpack_sharing_enabled":true,"jetpack-related-posts":[{"id":14893,"url":"http:\/\/bangla.sitestree.com\/?p=14893","url_meta":{"origin":16743,"position":0},"title":"AI\/ML\/Data Science: Cross Validation: KFold Cross validation: Concepts, Examples, Projects","author":"Sayed","date":"July 9, 2019","format":false,"excerpt":"Train\/Test Split and Cross Validation in Python https:\/\/towardsdatascience.com\/train-test-split-and-cross-validation-in-python-80b61beca4b6 sklearn.ensemble.RandomForestRegressor\u00b6 https:\/\/scikit-learn.org\/stable\/modules\/generated\/sklearn.ensemble.RandomForestRegressor.html \"A random forest regressor. A random forest is a meta estimator that fits a number of classifying decision trees on various sub-samples of the dataset and uses averaging to improve the predictive accuracy and control over-fitting. 
The sub-sample size is\u2026","rel":"","context":"In &quot;AI ML DS RL DL NN NLP Data Mining Optimization&quot;","block_context":{"text":"AI ML DS RL DL NN NLP Data Mining Optimization","link":"http:\/\/bangla.sitestree.com\/?cat=1910"},"img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":14828,"url":"http:\/\/bangla.sitestree.com\/?p=14828","url_meta":{"origin":16743,"position":1},"title":"Ridge and Lasso Regression: A Complete Guide with Python Scikit-Learn","author":"Sayed","date":"June 25, 2019","format":false,"excerpt":"","rel":"","context":"In &quot;\u09ac\u09cd\u09b2\u0997 \u0964 Blog&quot;","block_context":{"text":"\u09ac\u09cd\u09b2\u0997 \u0964 Blog","link":"http:\/\/bangla.sitestree.com\/?cat=182"},"img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":14811,"url":"http:\/\/bangla.sitestree.com\/?p=14811","url_meta":{"origin":16743,"position":2},"title":"Next Recession: How to Invest and Profit in the Next Recession","author":"Sayed","date":"June 17, 2019","format":false,"excerpt":"How to Invest and Profit in the Next Recession https:\/\/www.bloomberg.com\/opinion\/articles\/2019-06-17\/how-to-invest-and-profit-in-the-next-recession \" plan on deploying your cash in tranches: Buy a U.S. index fund when markets are down 20 to 25%; add a developed global index fund when markets fall by 30%. And if we are lucky enough to enjoy a\u2026","rel":"","context":"In &quot;AI ML DS RL DL NN NLP Data Mining Optimization&quot;","block_context":{"text":"AI ML DS RL DL NN NLP Data Mining Optimization","link":"http:\/\/bangla.sitestree.com\/?cat=1910"},"img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":14801,"url":"http:\/\/bangla.sitestree.com\/?p=14801","url_meta":{"origin":16743,"position":3},"title":"AI and ML in 2019: Health\/personalized medicine and AI\/ML\/DL\/NLP","author":"Sayed","date":"June 13, 2019","format":false,"excerpt":"Keeping up with AI in 2019. What is the next big thing in AI and ML? 
https:\/\/medium.com\/thelaunchpad\/what-is-the-next-big-thing-in-ai-and-ml-904a3f3345ef --- A Technical Overview of AI & ML (NLP, Computer Vision, Reinforcement Learning) in 2018 & Trends for 2019 https:\/\/www.analyticsvidhya.com\/blog\/2018\/12\/key-breakthroughs-ai-ml-2018-trends-2019\/ --- AI Research Trends https:\/\/ai100.stanford.edu\/2016-report\/section-i-what-artificial-intelligence\/ai-research-trends --- Artificial intelligence in healthcare: past, present and\u2026","rel":"","context":"In &quot;AI ML DS RL DL NN NLP Data Mining Optimization&quot;","block_context":{"text":"AI ML DS RL DL NN NLP Data Mining Optimization","link":"http:\/\/bangla.sitestree.com\/?cat=1910"},"img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":25027,"url":"http:\/\/bangla.sitestree.com\/?p=25027","url_meta":{"origin":16743,"position":4},"title":"AI and ML in 2019: Health\/personalized medicine and AI\/ML\/DL\/NLP #Root","author":"Author-Check- Article-or-Video","date":"April 14, 2021","format":false,"excerpt":"Keeping up with AI in 2019. What is the next big thing in AI and ML? 
https:\/\/medium.com\/thelaunchpad\/what-is-the-next-big-thing-in-ai-and-ml-904a3f3345ef --- A Technical Overview of AI & ML (NLP, Computer Vision, Reinforcement Learning) in 2018 & Trends for 2019 https:\/\/www.analyticsvidhya.com\/blog\/2018\/12\/key-breakthroughs-ai-ml-2018-trends-2019\/ --- AI Research Trends https:\/\/ai100.stanford.edu\/2016-report\/section-i-what-artificial-intelligence\/ai-research-trends --- Artificial intelligence in healthcare: past, present and\u2026","rel":"","context":"In &quot;FromSitesTree.com&quot;","block_context":{"text":"FromSitesTree.com","link":"http:\/\/bangla.sitestree.com\/?cat=1917"},"img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":16698,"url":"http:\/\/bangla.sitestree.com\/?p=16698","url_meta":{"origin":16743,"position":5},"title":"Misc Math, Data Science, Machine Learning, PCA, FA","author":"Sayed","date":"January 29, 2020","format":false,"excerpt":"\"In mathematics, a set B of elements (vectors) in a vector space V is called a basis, if every element of V may be written in a unique way as a (finite) linear combination of elements of B. 
The coefficients of this linear combination are referred to as components or\u2026","rel":"","context":"In &quot;AI ML DS RL DL NN NLP Data Mining Optimization&quot;","block_context":{"text":"AI ML DS RL DL NN NLP Data Mining Optimization","link":"http:\/\/bangla.sitestree.com\/?cat=1910"},"img":{"alt_text":"A=(a_(ij))","src":"https:\/\/i0.wp.com\/mathworld.wolfram.com\/images\/equations\/HermitianMatrix\/Inline1.gif?resize=350%2C200","width":350,"height":200},"classes":[]}],"_links":{"self":[{"href":"http:\/\/bangla.sitestree.com\/index.php?rest_route=\/wp\/v2\/posts\/16743","targetHints":{"allow":["GET"]}}],"collection":[{"href":"http:\/\/bangla.sitestree.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"http:\/\/bangla.sitestree.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"http:\/\/bangla.sitestree.com\/index.php?rest_route=\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"http:\/\/bangla.sitestree.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=16743"}],"version-history":[{"count":1,"href":"http:\/\/bangla.sitestree.com\/index.php?rest_route=\/wp\/v2\/posts\/16743\/revisions"}],"predecessor-version":[{"id":16793,"href":"http:\/\/bangla.sitestree.com\/index.php?rest_route=\/wp\/v2\/posts\/16743\/revisions\/16793"}],"wp:attachment":[{"href":"http:\/\/bangla.sitestree.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=16743"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"http:\/\/bangla.sitestree.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=16743"},{"taxonomy":"post_tag","embeddable":true,"href":"http:\/\/bangla.sitestree.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=16743"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}