Jan 27
Bloomberg: Theme of the Week
Jan 25
Change default browser for Jupyter Notebook
To Chrome
"
import webbrowser
webbrowser.register('chrome', None, webbrowser.GenericBrowser(r'C:\Program Files (x86)\Google\Chrome\Application\chrome.exe'))
c.NotebookApp.browser = 'chrome'
"
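These lines belong in jupyter_notebook_config.py (it can be generated with `jupyter notebook --generate-config` if it does not exist yet). The registration step itself can be sanity-checked on its own with the standard-library `webbrowser` module; the name and executable path below are placeholder assumptions, not the real Chrome path:

```python
import webbrowser

# Register a browser under a custom name.
# The executable path here is only a placeholder for illustration.
webbrowser.register('my-chrome', None,
                    webbrowser.GenericBrowser('/path/to/chrome'))

# Look the name up again to confirm the registration took effect.
browser = webbrowser.get('my-chrome')
print(type(browser).__name__)  # GenericBrowser
```

If `webbrowser.get()` raises no error, Jupyter should be able to resolve the registered name the same way.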
Jan 22
Recent Announcements from AWS
Jan 07
Machine Learning: Implement Multivariate Regression in Python
It may not be 100 percent correct.
Theory reference: https://www.cmpe.boun.edu.tr/~ethem/i2ml/slides/v1-1/i2ml-chap5-v1-1.pdf . This is an approximate solution to start with; understand the theory, then adjust, fix, and improve it.
import numpy as np
import random

print('Rows: please enter the number of samples for each variable/dimension')
n_number_of_samples_rows = int(input())
print('Columns/dimensions: please enter the number of variables')
m_number_of_variables_cols = int(input())

# initialize the input data arrays
print(m_number_of_variables_cols, n_number_of_samples_rows)
X = np.zeros((n_number_of_samples_rows, m_number_of_variables_cols))
Y_actually_R = np.zeros(n_number_of_samples_rows)

# generate random data
for n in range(n_number_of_samples_rows):
    for m in range(m_number_of_variables_cols):
        X[n, m] = random.random()
    Y_actually_R[n] = random.random()

print("X")
print(X)
print("Y (the target vector r)")
print(Y_actually_R)

# THE EQUATION: steps to calculate the w vector: w = ((X^T * X)^-1) * X^T * r

# transpose X
X_transpose = np.transpose(X)
print('X_transpose')
print(X_transpose)

# X^T * X
w_parameters = np.dot(X_transpose, X)
print('first dot')
print(w_parameters)

# (X^T * X)^-1
w_parameters = np.linalg.inv(w_parameters)
print('inverted')
print(w_parameters)

# ((X^T * X)^-1) * X^T
w_parameters = np.dot(w_parameters, X_transpose)
print('2nd dot')
print(w_parameters)

# ((X^T * X)^-1) * X^T * r
w_parameters = np.dot(w_parameters, Y_actually_R)
print('w_matrix')
print(w_parameters)
w_matrix = w_parameters

# Error: E(w | X) = 0.5 * sum over t of (r_t - sum over m of w_m * x_tm)^2
# One total sum. Note: X has no bias column here, so there is no separate
# w0 term; append a column of ones to X if an intercept is wanted.
error_sum = 0.0
for n in range(n_number_of_samples_rows):
    prediction = 0.0
    for m in range(m_number_of_variables_cols):
        prediction += w_matrix[m] * X[n, m]
    error_sum += (Y_actually_R[n] - prediction) ** 2
error_sum = 0.5 * error_sum
print('Error (one number, i.e. the total sum of squared errors)')
print(error_sum)
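The normal-equation solution above can be cross-checked against NumPy's built-in least-squares solver. This is a sketch with fixed random data instead of `input()`; the two results should agree to numerical precision:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((10, 3))   # 10 samples, 3 variables
r = rng.random(10)        # target vector

# normal equations: w = (X^T X)^-1 X^T r
w_normal = np.linalg.inv(X.T @ X) @ X.T @ r

# library solver for the same least-squares problem
w_lstsq, *_ = np.linalg.lstsq(X, r, rcond=None)

print(w_normal)
print(w_lstsq)
```

In practice `np.linalg.lstsq` (or a QR/SVD-based solver) is preferred over explicitly inverting X^T X, which can be numerically unstable.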
Jan 06
Laravel Accountant Package, MyCLI, Development on an iPad, and More — №238
Building a Chatbot with Laravel and BotMan: "Building a Chatbot with Laravel and BotMan" is a hands-on guide to building your own personal chatbot.
Accountant Laravel Package: The Accountant composer package is a Laravel accountability package for your Eloquent models, by developer Quetzy Garcia.
MyCLI, a MySQL CLI with Auto-completion and Syntax Highlighting: "If you use the MySQL command line tool, but also like things like auto-completion and syntax highlighting, you should check out mycli."
Job Listings: Senior Web and Contact Center Backend Developer; Senior Laravel Developer
Have you thought about casing?
Laravel Cashier Missing Things for Stripe Subscription
Real-time Chat System in Laravel WebSockets, Vue.js and Laravel-echo
Did You Know: Five Additional Filters in belongsTo() or hasMany()
Create Mocks for API Clients in Laravel
Four Laravel Validation Rules for Images and Photos
How to create a Backpack for Laravel Add-On
Laravel, Cloudflare and Trusted Proxies
Dynamic relationships in Laravel using subqueries
Form-wrapper-js, a library to manage your forms in Vue
laravel-route-coverage-test
Laravel Custom URLs
Laravel Cascade Updates
New Package Laravel-Searchable: Easily Search in Multiple Models
Visual Studio Code Snippets for Backpack for Laravel
Third annual North Meets South meets Dads in Dev meets TJ Miller meets Chris Gmyr Belated Christmas Extravaganza Podcast
Last Month: Laravel WebSockets Package Released; Speeding Up PHP with OPcache in Docker; Your Code is Broken. Sentry Can Fix It. (sponsor)
Last Year: Building a Vue SPA with Laravel; Rainglow Editor Themes by Dayle Rees
Two Years Ago: Laravel Powered Blogging App Canvas Launches V3; Not Secure Warnings are Coming to Chrome 56, Add an SSL to prevent it; SameTime: Group Text Reminders App Built on Laravel
Jan 05
Cluster Manager Courses: Cluster Server Manager: Veritas, Solaris, and Similar
CCNA/CCNP/RHCE/MCSE are popular topics. However, for infrastructure jobs, cluster-management skills will certainly help.
List of cluster management software
https://en.wikipedia.org/wiki/List_of_cluster_management_software
Veritas Cluster Server 6.0 for Windows: Administration
https://www.globalknowledge.com/en-AE/Courses/Veritas/Storage/HA0435
Symantec Cluster Server
https://www.symantec.com/en/ca/products-solutions/training/product-training/detail.jsp?pkid=cluster_server
VERITAS CLUSTER SERVER 6.0 Administration Training & Certification Courses
https://www.koenig-solutions.com/veritas-cluster-server-6-administration-training-course.aspx
VERITAS CLUSTER SERVER 6.0/6.1
https://www.radicaltechnologies.co.in/high-availability/veritas-cluster-server-5-1-training-in-pune/
Veritas Cluster Server 6.x for Unix: Advanced Administration
https://www.learnquest.com/course-detail-v3.aspx?cnum=ha0414-e1xc
Veritas Cluster Manager
http://www.forscheredu.com/veritas-cluster-manager/
Microsoft Cluster Service Alternatives
https://www.itprotoday.com/compute-engines/microsoft-cluster-service-alternatives
Cluster Management Topics
http://haifux.org/lectures/168/linux-ha-clusters.html
Jan 05
Volume Managers: For Infrastructure Jobs
We know about CCNA/CCNP/MCSE, but we may not know about volume managers.
Linux Logical Volume Manager (LVM)
https://www.udemy.com/linux-logical-volume-manager-lvm/
Solaris Volume Manager Administration
https://education.oracle.com/solaris-volume-manager-administration/courP_504
Veritas Volume Manager 6.1
http://www.krnetworkcloud.org/vvm.html
Solaris Volume Manager Administration Training & Certification Courses
Veritas Volume Manager Administration 6.0 for RHEL Training & Certification Courses
https://www.koenig-solutions.com/rhel-veritas-manager-vxvm-admin-6-training-course.aspx#tab2
HP-UX Logical Volume Manager
https://www.qa.com/training-courses/technical-it-training/hp/hp-hardware/hp-ux–hp-integrity/system-administrator/hp-ux-logical-volume-manager
Jan 05
Basic Matrix Operations:
def matrix_multiplication(m, n):
    # the product of a (p x q) and a (q x r) matrix is (p x r)
    o_row, o_col = len(m), len(n[0])
    output_matrix = [[0 for _ in range(o_col)] for _ in range(o_row)]
    for row in range(o_row):
        for col in range(o_col):
            # dot product of row `row` of m with column `col` of n
            temp = 0
            for k in range(len(n)):
                temp += m[row][k] * n[k][col]
            output_matrix[row][col] = temp
    return output_matrix

# 3 x 2
m = [
    [1, 2],
    [1, 2],
    [1, 2]
]
# 2 x 3
n = [
    [1, 2, 3],
    [1, 2, 3]
]
output_matrix = matrix_multiplication(m, n)
print(output_matrix)
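As a quick sanity check, the same product can be computed with NumPy and compared against the hand-rolled version:

```python
import numpy as np

# the same 3x2 and 2x3 matrices as above
m = [[1, 2], [1, 2], [1, 2]]
n = [[1, 2, 3], [1, 2, 3]]

# NumPy's matrix product of the two lists-of-lists
result = np.dot(np.array(m), np.array(n))
print(result)  # each row is [3, 6, 9]
```

If the pure-Python function is correct, its output converted with `np.array` should equal `result` exactly.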
Jan 05
Basic Numpy Operations
import numpy as np
a = np.arange(15).reshape(3, 5)
print(a)
print(a.shape)
print(a.ndim)
print(a.dtype.name)
print(a.itemsize)
print(a.size)
print(type(a))
b = np.array([6, 7, 8])
print(b)
print(type(b))
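A few more basics in the same style may be useful: elementwise arithmetic, slicing, and axis-wise aggregation on the same `arange` array.

```python
import numpy as np

a = np.arange(15).reshape(3, 5)

# elementwise arithmetic applies to every entry
print(a * 2)

# slicing: second row, columns 1 through 3
print(a[1, 1:4])

# aggregate along an axis: axis=0 sums down each column
print(a.sum(axis=0))
```

`axis=0` collapses the rows (one sum per column); `axis=1` would collapse the columns instead.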
Jan 05
Implement Gradient Descent:
"
Gradient descent is a first-order iterative optimization algorithm for finding the minimum of a function. To find a local minimum of a function using gradient descent, one takes steps proportional to the negative of the gradient (or approximate gradient) of the function at the current point.
Gradient descent – Wikipedia
https://en.wikipedia.org/wiki/Gradient_descent
"
Gradient Descent
# From calculation, it is expected that the local minimum occurs at x=9/4
"""
cur_x = 6  # The algorithm starts at x=6
gamma = 0.01  # step size multiplier
precision = 0.00001
previous_step_size = 1
max_iters = 10000  # maximum number of iterations
iters = 0  # iteration counter
df = lambda x: 4 * x**3 - 9 * x**2
while previous_step_size > precision and iters < max_iters:
    prev_x = cur_x
    cur_x -= gamma * df(prev_x)
    previous_step_size = abs(cur_x - prev_x)
    iters += 1
print("The local minimum occurs at", cur_x)
# The output for the above will be: ('The local minimum occurs at', 2.2499646074278457)
"""
#----
print('my part')
co_ef = 6
iter_count = 0
max_iter = 1000
gamma = 0.001
step = 1
precision = 0.0000001
df = lambda x: 4 * x * x * x - 9 * x * x
# stop once the step shrinks below the precision or the iteration cap is hit
while (iter_count <= max_iter) and (step >= precision):
    prev_co_ef = co_ef
    co_ef -= gamma * df(prev_co_ef)
    step = abs(prev_co_ef - co_ef)
    iter_count += 1
print(co_ef)
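The same loop can be wrapped in a small reusable function; this is a sketch (the function name and signature are my own, not from the references above), applied to the same derivative df(x) = 4x^3 - 9x^2, whose local minimum is expected at x = 9/4:

```python
def gradient_descent(df, x0, gamma=0.01, precision=1e-5, max_iters=10000):
    """Minimize a function via its derivative df, starting from x0."""
    x = x0
    for _ in range(max_iters):
        step = gamma * df(x)  # step proportional to the gradient
        x -= step             # move against the gradient
        if abs(step) < precision:
            break
    return x

# f(x) = x^4 - 3x^3, so df(x) = 4x^3 - 9x^2; minimum expected near x = 9/4
x_min = gradient_descent(lambda x: 4 * x**3 - 9 * x**2, 6)
print(x_min)  # converges close to 2.25
```

Factoring the loop out this way makes it easy to reuse the same routine on other one-dimensional functions by passing a different derivative and starting point.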
Sayed Ahmed
sayedum
Linkedin: https://ca.linkedin.com/in/sayedjustetc
Blog: http://sitestree.com, http://bangla.salearningschool.com