{"id":24915,"date":"2021-04-13T23:10:04","date_gmt":"2021-04-14T03:10:04","guid":{"rendered":"http:\/\/bangla.salearningschool.com\/recent-posts\/implement-gradient-descend-root\/"},"modified":"2021-04-13T23:10:04","modified_gmt":"2021-04-14T03:10:04","slug":"implement-gradient-descend-root","status":"publish","type":"post","link":"http:\/\/bangla.sitestree.com\/?p=24915","title":{"rendered":"Implement Gradient Descent: #Root"},"content":{"rendered":"<blockquote>\n<p><strong>Gradient descent<\/strong> is a first-order iterative optimization algorithm for finding the minimum of a function. To find a local minimum of a function using <strong>gradient descent<\/strong>, one takes steps proportional to the negative of the <strong>gradient<\/strong> (or approximate <strong>gradient<\/strong>) of the function at the current point.<\/p>\n<p>Gradient descent &#8211; Wikipedia: <a href=\"https:\/\/en.wikipedia.org\/wiki\/Gradient_descent\">https:\/\/en.wikipedia.org\/wiki\/Gradient_descent<\/a><\/p>\n<\/blockquote>\n<h2>Gradient Descent<\/h2>\n<p dir=\"ltr\">The example below minimizes f(x) = x<sup>4<\/sup> &#8211; 3x<sup>3<\/sup>, whose derivative is f'(x) = 4x<sup>3<\/sup> &#8211; 9x<sup>2<\/sup>. From calculation, the local minimum is expected at x = 9\/4.<\/p>\n<pre><code>cur_x = 6                  # the algorithm starts at x = 6\ngamma = 0.01               # step size multiplier\nprecision = 0.00001        # stop once a step becomes smaller than this\nprevious_step_size = 1\nmax_iters = 10000          # maximum number of iterations\niters = 0                  # iteration counter\ndf = lambda x: 4 * x**3 - 9 * x**2   # derivative of f(x) = x**4 - 3*x**3\n\nwhile previous_step_size &gt; precision and iters &lt; max_iters:\n    prev_x = cur_x\n    cur_x -= gamma * df(prev_x)\n    previous_step_size = abs(cur_x - prev_x)\n    iters += 1\n\nprint(&quot;The local minimum occurs at&quot;, cur_x)\n# Output: The local minimum occurs at 2.2499646074278457<\/code><\/pre>\n
<p dir=\"ltr\">A second version of the same descent (&#8216;my part&#8217;), with a smaller step size and a tighter precision; the loop stops when either the iteration budget is exhausted or the step size falls below the precision:<\/p>\n<pre><code>print('my part')\nco_ef = 6                  # the algorithm starts at x = 6\nn_iter = 0                 # iteration counter\nmax_iter = 1000            # maximum number of iterations\ngamma = 0.001              # step size multiplier\nstep = 1\nprecision = 0.0000001\ndf = lambda x: 4 * x * x * x - 9 * x * x\n\nwhile (n_iter &lt;= max_iter) and (step &gt;= precision):\n    prev_co_ef = co_ef\n    co_ef -= gamma * df(prev_co_ef)\n    step = abs(prev_co_ef - co_ef)\n    n_iter += 1\n\nprint(co_ef)<\/code><\/pre>\n<p>From: https:\/\/sitestree.com\/implement-gradient-descend\/<br \/>Categories: Root<br \/>Tags:<br \/>Post Date: 2019-01-05 15:17:29<\/p>\n<p>Shop Online: <a href='https:\/\/www.ShopForSoul.com\/' target='new' rel=\"noopener\">https:\/\/www.ShopForSoul.com\/<\/a><br \/>\nCourses (Big Data, Cloud, Security, Machine Learning): <a href='http:\/\/Training.SitesTree.com' target='new' rel=\"noopener\">http:\/\/Training.SitesTree.com<\/a><br \/>\nIn Bengali: <a href='http:\/\/Bangla.SaLearningSchool.com' target='new' rel=\"noopener\">http:\/\/Bangla.SaLearningSchool.com<\/a><br \/>\n<a href='http:\/\/SitesTree.com' target='new' rel=\"noopener\">http:\/\/SitesTree.com<\/a><br \/>\n8112223 Canada Inc.\/JustEtc: <a href='http:\/\/JustEtc.net' target='new' rel=\"noopener\">http:\/\/JustEtc.net (Software\/Web\/Mobile\/Big-Data\/Machine Learning)<\/a><br \/>\nMedium: <a href='https:\/\/medium.com\/@SayedAhmedCanada' target='new' rel=\"noopener\">https:\/\/medium.com\/@SayedAhmedCanada<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>&quot; Gradient descent is a first-order iterative optimization algorithm for 
finding the minimum of a function. To find a local minimum of a function usinggradient descent, one takes steps proportional to the negative of the gradient (or approximategradient) of the function at the current point. Gradient descent &#8211; Wikipedia https:\/\/en.wikipedia.org\/wiki\/Gradient_descent &quot; Gradient Descend # From &hellip; <\/p>\n<p><a class=\"more-link btn\" href=\"http:\/\/bangla.sitestree.com\/?p=24915\">Continue reading<\/a><\/p>\n","protected":false},"author":8,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_jetpack_memberships_contains_paid_content":false,"footnotes":""},"categories":[1917],"tags":[],"class_list":["post-24915","post","type-post","status-publish","format-standard","hentry","category-fromsitestree-com","item-wrap"],"jetpack_featured_media_url":"","jetpack_sharing_enabled":true,"jetpack-related-posts":[{"id":14558,"url":"http:\/\/bangla.sitestree.com\/?p=14558","url_meta":{"origin":24915,"position":0},"title":"Implement Gradient Descend:","author":"Sayed","date":"January 5, 2019","format":false,"excerpt":"\" Gradient descent is a first-order iterative optimization algorithm for finding the minimum of a function. To find a local minimum of a function usinggradient descent, one takes steps proportional to the negative of the gradient (or approximategradient) of the function at the current point. 
Gradient descent - Wikipedia https:\/\/en.wikipedia.org\/wiki\/Gradient_descent\u2026","rel":"","context":"In &quot;Math and Statistics for Data Science, and Engineering&quot;","block_context":{"text":"Math and Statistics for Data Science, and Engineering","link":"http:\/\/bangla.sitestree.com\/?cat=1908"},"img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":6222,"url":"http:\/\/bangla.sitestree.com\/?p=6222","url_meta":{"origin":24915,"position":1},"title":"\u09b8\u09bf \u098f\u09b8 \u098f\u09b8 \u09e9 \u0997\u09cd\u09b0\u09cd\u09af\u09be\u09a1\u09bf\u09df\u09c7\u09a8\u09cd\u099f\u09b8 (CSS3 Gradients)","author":"Author-Check- Article-or-Video","date":"February 9, 2015","format":false,"excerpt":"CSS3 gradients \u09a6\u09c1\u099f\u09bf color \u098f\u09b0 \u09ae\u09a7\u09cd\u09af\u09c7 \u09b8\u0982\u09af\u09cb\u0997 \u09b8\u09cd\u09a5\u09b2 \u0995\u09c7 \u09ae\u09b8\u09c3\u09a3 \u0995\u09b0\u09c7\u0964 \u098f\u0987 \u0995\u09be\u099c \u099f\u09bf \u0995\u09b0\u09be\u09b0 \u099c\u09a8\u09cd\u09af \u0986\u09aa\u09a8\u09be\u0995\u09c7 \u098f\u0995\u099f\u09bf image \u09ac\u09cd\u09af\u09be\u09ac\u09b9\u09be\u09b0 \u0995\u09b0\u09a4\u09c7 \u09b9\u09ac\u09c7\u0964 \u09a4\u09ac\u09c7 CSS3 gradient \u09ac\u09cd\u09af\u09be\u09ac\u09b9\u09be\u09b0 \u0995\u09b0\u09c7 \u0986\u09aa\u09a8\u09bf download \u098f\u09ac\u0982 bandwidth usage \u0995\u09ae\u09be\u09a4\u09c7 \u09aa\u09be\u09b0\u09ac\u09c7\u09a8\u0964 \u098f\u0995\u0987\u09b8\u09be\u09a5\u09c7 gradient \u098f\u09b0 \u09ae\u09be\u09a7\u09cd\u09af\u09ae\u09c7 \u0995\u09b0\u09be element \u0997\u09c1\u09b2\u09bf \u09ad\u09be\u09b2 \u09a6\u09c7\u0996\u09be\u09df \u09af\u0996\u09a8 zoom \u0995\u09b0\u09be \u09b9\u09df, \u0995\u09c7\u09a8\u09a8\u09be gradient \u09b8\u09ac\u09b8\u09ae\u09df\u2026","rel":"","context":"In &quot;\u09b8\u09bf \u098f\u09b8 \u098f\u09b8 \u099f\u09bf\u0989\u099f\u09cb\u09b0\u09bf\u09df\u09be\u09b2 (css tutorial)&quot;","block_context":{"text":"\u09b8\u09bf \u098f\u09b8 \u098f\u09b8 
\u099f\u09bf\u0989\u099f\u09cb\u09b0\u09bf\u09df\u09be\u09b2 (css tutorial)","link":"http:\/\/bangla.sitestree.com\/?cat=174"},"img":{"alt_text":"chrome","src":"https:\/\/i0.wp.com\/www.w3schools.com\/images\/compatible_chrome.gif?resize=350%2C200","width":350,"height":200},"classes":[]},{"id":17362,"url":"http:\/\/bangla.sitestree.com\/?p=17362","url_meta":{"origin":24915,"position":2},"title":"Parameter Optimization in\/using Neural Network","author":"Sayed","date":"August 27, 2020","format":false,"excerpt":"Parameter optimization in neural networks https:\/\/www.deeplearning.ai\/ai-notes\/optimization\/ Coding Deep Learning for Beginners \u2014 Linear Regression (Part 2): Cost Function https:\/\/towardsdatascience.com\/coding-deep-learning-for-beginners-linear-regression-part-2-cost-function-49545303d29f CBMM Tutorial: Optimization Notes https:\/\/cbmm.mit.edu\/sites\/default\/files\/documents\/CBMM_Optimization_Notes_2017.html Gradient descent https:\/\/www.jeremyjordan.me\/gradient-descent\/ Hyper-parameter Tuning Techniques in Deep Learning https:\/\/towardsdatascience.com\/hyper-parameter-tuning-techniques-in-deep-learning-4dad592c63c8 Model Parameters and Hyperparameters in Machine Learning \u2014 What is the difference? https:\/\/towardsdatascience.com\/model-parameters-and-hyperparameters-in-machine-learning-what-is-the-difference-702d30970f6 Lecture 8: Optimization\u2026","rel":"","context":"In &quot;\u09ac\u09cd\u09b2\u0997 \u0964 Blog&quot;","block_context":{"text":"\u09ac\u09cd\u09b2\u0997 \u0964 Blog","link":"http:\/\/bangla.sitestree.com\/?cat=182"},"img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":10400,"url":"http:\/\/bangla.sitestree.com\/?p=10400","url_meta":{"origin":24915,"position":3},"title":"Draws a circle with a gradient fill","author":"","date":"August 28, 2015","format":false,"excerpt":"GradientPaintExample.java Draws a circle with a gradient fill. Inherits from ShapeExample.java. 
************************************** import java.awt.*; \/** An example of applying a gradient fill to a circle. The \u00a0*\u00a0 color definition starts with red at (0,0), gradually \u00a0*\u00a0 changing to yellow at (175,175). \u00a0* \u00a0********************************** public class GradientPaintExample extends ShapeExample { \u00a0\u2026","rel":"","context":"In &quot;Code . Programming Samples . \u09aa\u09cd\u09b0\u09cb\u0997\u09cd\u09b0\u09be\u09ae \u0989\u09a6\u09be\u09b9\u09b0\u09a8&quot;","block_context":{"text":"Code . Programming Samples . \u09aa\u09cd\u09b0\u09cb\u0997\u09cd\u09b0\u09be\u09ae \u0989\u09a6\u09be\u09b9\u09b0\u09a8","link":"http:\/\/bangla.sitestree.com\/?cat=1417"},"img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":26878,"url":"http:\/\/bangla.sitestree.com\/?p=26878","url_meta":{"origin":24915,"position":4},"title":"Draws a circle with a gradient fill #Programming Code Examples #Java\/J2EE\/J2ME #Drawing","author":"Author-Check- Article-or-Video","date":"May 4, 2021","format":false,"excerpt":"GradientPaintExample.java Draws a circle with a gradient fill. Inherits from ShapeExample.java. ************************************** import java.awt.*; \/** An example of applying a gradient fill to a circle. The * color definition starts with red at (0,0), gradually * changing to yellow at (175,175). * ********************************** public class GradientPaintExample extends ShapeExample { private\u2026","rel":"","context":"In &quot;FromSitesTree.com&quot;","block_context":{"text":"FromSitesTree.com","link":"http:\/\/bangla.sitestree.com\/?cat=1917"},"img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":16296,"url":"http:\/\/bangla.sitestree.com\/?p=16296","url_meta":{"origin":24915,"position":5},"title":"Deep Learning (DL): can you answer these introductory questions on DL? 
Target: Starters in DL","author":"Sayed","date":"October 5, 2019","format":false,"excerpt":"Deep Learning - 001: Introduction to Deep Learning. Deep Learning (DL): can you answer these introductory questions on DL? Target: Starters in DL Can you define AI, ML, DL? Can you draw a diagram to show the relations of AI, ML, DL? What is Symbolic AI? Is Symbolic AI good\u2026","rel":"","context":"In &quot;\u09ac\u09cd\u09b2\u0997 \u0964 Blog&quot;","block_context":{"text":"\u09ac\u09cd\u09b2\u0997 \u0964 Blog","link":"http:\/\/bangla.sitestree.com\/?cat=182"},"img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]}],"_links":{"self":[{"href":"http:\/\/bangla.sitestree.com\/index.php?rest_route=\/wp\/v2\/posts\/24915","targetHints":{"allow":["GET"]}}],"collection":[{"href":"http:\/\/bangla.sitestree.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"http:\/\/bangla.sitestree.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"http:\/\/bangla.sitestree.com\/index.php?rest_route=\/wp\/v2\/users\/8"}],"replies":[{"embeddable":true,"href":"http:\/\/bangla.sitestree.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=24915"}],"version-history":[{"count":0,"href":"http:\/\/bangla.sitestree.com\/index.php?rest_route=\/wp\/v2\/posts\/24915\/revisions"}],"wp:attachment":[{"href":"http:\/\/bangla.sitestree.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=24915"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"http:\/\/bangla.sitestree.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=24915"},{"taxonomy":"post_tag","embeddable":true,"href":"http:\/\/bangla.sitestree.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=24915"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}