Python: Ecommerce: Part 3: Remove Unwanted Categories (and Products), and Remove Products Based on Words in the Title, After Merging All Supplier Data Files into One File

You could also remove products that are not allowed in a given country or marketplace, as well as products (some brands) that you are not authorized to sell.

All code in one block. Please check the other parts of this series/publication.

The code could be simplified. You could join multiple blocks into one by keeping the words/category names in a list and filtering against that list, and you could join conditions with and (&) or or (|) operations to reduce the number of lines of code.
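As a rough sketch of that list-based approach (toy data and placeholder keyword lists, not the real supplier feed or the real blocked terms):

```python
import re

import pandas as pd

# Placeholder data standing in for unique_sorted_data
unique_sorted_data = pd.DataFrame({
    "Full Product Name": ["Laser Pointer Pen", "Pull Rope Fitness Bands", "HDMI Streaming Stick"],
    "Category Name": ["Laser Gadgets", "Toys & Games", "Android TV Box / Stick"],
})

# Keep the unwanted categories/words in lists ...
blocked_categories = ["Apparel", "TV Box", "Laser", "Costume"]
blocked_words = ["Laser", "Tracker", "HDMI", "Streaming"]

# ... and filter against each list with a single regex alternation
cat_pattern = "|".join(re.escape(w) for w in blocked_categories)
word_pattern = "|".join(re.escape(w) for w in blocked_words)

filtered = unique_sorted_data[
    ~unique_sorted_data["Category Name"].str.contains(cat_pattern, case=False, na=False)
    & ~unique_sorted_data["Full Product Name"].str.contains(word_pattern, case=False, na=False)
]
print(filtered.shape)  # (1, 2) -- only the fitness bands survive here
```

This replaces one assignment per keyword with one mask per column, and the keyword lists become data you can maintain separately from the code.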

# # Section: Remove products that have slang words

# In[24]:
unique_sorted_data['Category Name'].unique()

# In[30]:
# Remove products from a category that you do not want to sell
# Apparel
unique_sorted_data_filter_category = unique_sorted_data[
    ~(unique_sorted_data['Category Name'].str.contains("Apparel", case=False, na=False))
]
# Android TV Box
unique_sorted_data_filter_category = unique_sorted_data_filter_category[
    ~(unique_sorted_data_filter_category['Category Name'].str.contains("TV Box", case=False, na=False))
]
# Laser products
unique_sorted_data_filter_category = unique_sorted_data_filter_category[
    ~(unique_sorted_data_filter_category['Category Name'].str.contains("Laser", case=False, na=False))
]
# Costume
unique_sorted_data_filter_category = unique_sorted_data_filter_category[
    ~(unique_sorted_data_filter_category['Category Name'].str.contains("Costume", case=False, na=False))
]
# Show the categories and the count after removal
unique_sorted_data_filter_category['Category Name'].unique(), unique_sorted_data_filter_category.shape

# In[31]:
##################
# Section: Remove products that have slang/bad words in the name.
# You do not need this block - it was for testing only; another block does this job.
unique_sorted_data_filter_1 = unique_sorted_data[
    (unique_sorted_data['Full Product Name'].str.contains("Slang 1", case=False, na=False))
    | (unique_sorted_data['Full Product Name'].str.contains("Slang 2", case=False, na=False))
    | (unique_sorted_data['Full Product Name'].str.contains("Slang 3", case=False, na=False))
    | (unique_sorted_data['Full Product Name'].str.contains("Slang 4", case=False, na=False))
]  # [['Full Product Name', 'Category Name']]
unique_sorted_data_filter_1.shape  # , unique_sorted_data_filter_1.head(1)
##################

# In[38]:
# Remove products that have slang words in the product name
unique_sorted_filtered_data = unique_sorted_data_filter_category[
    ~(unique_sorted_data_filter_category['Full Product Name'].str.contains("Bad word 1", case=False, na=False))
]
print(unique_sorted_filtered_data.shape)
unique_sorted_filtered_data = unique_sorted_filtered_data[
    ~(unique_sorted_filtered_data['Full Product Name'].str.contains("Bad word 2", case=False, na=False))
]
print(unique_sorted_filtered_data.shape)
unique_sorted_filtered_data = unique_sorted_filtered_data[
    ~(unique_sorted_filtered_data['Full Product Name'].str.contains("Bad word 3", case=False, na=False))
]
print(unique_sorted_filtered_data.shape)
unique_sorted_filtered_data = unique_sorted_filtered_data[
    ~(unique_sorted_filtered_data['Full Product Name'].str.contains("Bad word 4", case=False, na=False))
]
print(unique_sorted_filtered_data.shape)
unique_sorted_filtered_data = unique_sorted_filtered_data[
    ~(unique_sorted_filtered_data['Full Product Name'].str.contains("Bad word 5", case=False, na=False))
]
print(unique_sorted_filtered_data.shape)
unique_sorted_filtered_data = unique_sorted_filtered_data[
    ~(unique_sorted_filtered_data['Full Product Name'].str.contains("Bad word 1", case=False, na=False))
]
print(unique_sorted_filtered_data.shape)
unique_sorted_filtered_data = unique_sorted_filtered_data[
    ~(unique_sorted_filtered_data['Full Product Name'].str.contains("Tracker", case=False, na=False))
]
print(unique_sorted_filtered_data.shape)
unique_sorted_filtered_data = unique_sorted_filtered_data[
    ~(unique_sorted_filtered_data['Full Product Name'].str.contains("Laser", case=False, na=False))
]
# Brands that you do not want to sell
print(unique_sorted_filtered_data.shape)
unique_sorted_filtered_data = unique_sorted_filtered_data[
    ~(unique_sorted_filtered_data['Full Product Name'].str.contains("VKworld", case=False, na=False))
]
print(unique_sorted_filtered_data.shape)
unique_sorted_filtered_data = unique_sorted_filtered_data[
    ~(unique_sorted_filtered_data['Full Product Name'].str.contains("Samsung", case=False, na=False))
]
# Video streaming / TV Box / HDMI -- these products are sensitive (intellectual property rights)
print(unique_sorted_filtered_data.shape)
unique_sorted_filtered_data = unique_sorted_filtered_data[
    ~(unique_sorted_filtered_data['Full Product Name'].str.contains("Car Video", case=False, na=False))
]
print(unique_sorted_filtered_data.shape)
unique_sorted_filtered_data = unique_sorted_filtered_data[
    ~(unique_sorted_filtered_data['Full Product Name'].str.contains("Streaming", case=False, na=False))
]
print(unique_sorted_filtered_data.shape)
unique_sorted_filtered_data = unique_sorted_filtered_data[
    ~(unique_sorted_filtered_data['Full Product Name'].str.contains("HDMI", case=False, na=False))
]
print(unique_sorted_filtered_data.shape)
unique_sorted_filtered_data.shape

# In[39]:
# Send the filtered data to a file
unique_sorted_filtered_data.to_csv("../all_supplier_data_unique_sorted_and_filtered.csv")

From Jupyter Notebook: Cell by Cell with Output

Section: Remove products that have slang/bad words or similar, or brands/products that you do not want to sell (products you are not allowed to sell or resell, restricted products, or similar).

In [24]:

unique_sorted_data['Category Name'].unique()

Out[24]:

array(['Toys & Games', 'Drone & Quadcopter', 'Cool Gadgets',
       'Office supplies', 'Novelty Costumes & Accessories',
       "Women's Jewelry", 'External Parts', 'Vehicle Electronics Devices',
       'Replacement Parts', 'Internal Parts', 'Lamps and Accessories',
       'Video Games', 'Hair Care', 'Skin Care',
       'Makeup Tool & Accessories', 'Household Products',
       "Women's Accessories", "Women's Apparel", 'Home accessories',
       'Oral Respiratory Protection', "Men's Accessories",
       "Men's Apparel", "Girl's Apparel", 'Cell Phone Accessories',
       'Health Care', 'Electronic Accessories', 'Health tools',
       'Computer Peripherals', 'Audio & Video Gadgets', nan,
       'Headrest Monitors & DVD Players', 'Car DVR',
       'Camera Equipment / Accessories', 'Personal Care',
       'Laser Gadgets & Measuring Tools', 'Accessories',
       'Electronic Cigarettes', 'Sports Action Camera',
       'Android TV Box / Stick', 'Sports & Body Building',
       'Smart Watches', 'Security & Surveillance', 'Android Tablets',
       'Musical Instruments & Accessorie', 'LED', 'Outdoor Recrections',
       'Tools & Home Decor', 'Home, Kitchen & Garden', 'Home Electrical',
       'Bedding & Bath', 'Camping & Hiking', 'Drives & Storage',
       'Pet Supplies', 'Hunting & Fishing', 'Garden & Lawn',
       'Medical treatments', 'Android Smartphones', 'Car Video',
       'Cell Phones', 'Cycling', 'Solar Products', 'Doogee Phones',
       'Rugged Phones', 'Ulefone Phones', 'Xiaomi Phones', 'Huawei Phone',
       'Lenovo Phones', 'Refurbished iPhones', 'Samsung Phones',
       'Water Sport', 'Tools & Equipment', 'Repair Accessories',
       'Body protection', 'Disinfection and sterilization', "Men's Care",
       'Cleaning Supplies', 'Baby Girls Apparel', "Women's Bags",
       "Women's Shoes", "Men's Jewelry", 'Baby Boys Apparel',
       "Boy's Apparel", "Girl's Shoes", "Girl's Jewelry", "Boy's Shoes",
       'kN95/KF94 Mask', 'Flash Drives + Memory Cards',
       '6-7 Inch Android Phones', 'Apple Phones', 'Xiaomi Phone',
       'Laptops & Tablets', 'Apple iPad', 'Musical Instruments',
       'Computer Accessories', 'Ball Games', "Boy's Jewelry"],
      dtype=object)
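A side note on the na=False arguments used throughout (a general pandas behavior, not something specific to this dataset): the category list above contains nan, and str.contains returns NaN for missing values unless na= is set, which breaks boolean masking. A minimal illustration with toy values:

```python
import pandas as pd

# A column with a missing value, like the nan in 'Category Name' above
s = pd.Series(["Apparel - Men", "Cool Gadgets", None])

# Without na=..., the missing entry stays NaN and cannot be used as a boolean mask
print(s.str.contains("Apparel", case=False))

# na=False treats missing names as "no match", so the mask is all-boolean
mask = s.str.contains("Apparel", case=False, na=False)
print(mask.tolist())  # [True, False, False]
```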

In [30]:

# Remove products from a category that you do not want to sell
unique_sorted_data_filter_category = unique_sorted_data[~(unique_sorted_data['Category Name'].str.contains("Apparel", case=False, na=False))]
unique_sorted_data_filter_category = unique_sorted_data_filter_category[~(unique_sorted_data_filter_category['Category Name'].str.contains("TV Box", case=False, na=False))]
unique_sorted_data_filter_category = unique_sorted_data_filter_category[~(unique_sorted_data_filter_category['Category Name'].str.contains("Laser", case=False, na=False))]
unique_sorted_data_filter_category = unique_sorted_data_filter_category[~(unique_sorted_data_filter_category['Category Name'].str.contains("Costume", case=False, na=False))]
unique_sorted_data_filter_category['Category Name'].unique(), unique_sorted_data_filter_category.shape

Out[30]:

(array(['Toys & Games', 'Drone & Quadcopter', 'Cool Gadgets',
        'Office supplies', "Women's Jewelry", 'External Parts',
        'Vehicle Electronics Devices', 'Replacement Parts',
        'Internal Parts', 'Lamps and Accessories', 'Video Games',
        'Hair Care', 'Skin Care', 'Makeup Tool & Accessories',
        'Household Products', "Women's Accessories", 'Home accessories',
        'Oral Respiratory Protection', "Men's Accessories",
        'Cell Phone Accessories', 'Health Care', 'Electronic Accessories',
        'Health tools', 'Computer Peripherals', 'Audio & Video Gadgets',
        nan, 'Headrest Monitors & DVD Players', 'Car DVR',
        'Camera Equipment / Accessories', 'Personal Care', 'Accessories',
        'Electronic Cigarettes', 'Sports Action Camera',
        'Sports & Body Building', 'Smart Watches',
        'Security & Surveillance', 'Android Tablets',
        'Musical Instruments & Accessorie', 'LED', 'Outdoor Recrections',
        'Tools & Home Decor', 'Home, Kitchen & Garden', 'Home Electrical',
        'Bedding & Bath', 'Camping & Hiking', 'Drives & Storage',
        'Pet Supplies', 'Hunting & Fishing', 'Garden & Lawn',
        'Medical treatments', 'Android Smartphones', 'Car Video',
        'Cell Phones', 'Cycling', 'Solar Products', 'Doogee Phones',
        'Rugged Phones', 'Ulefone Phones', 'Xiaomi Phones', 'Huawei Phone',
        'Lenovo Phones', 'Refurbished iPhones', 'Samsung Phones',
        'Water Sport', 'Tools & Equipment', 'Repair Accessories',
        'Body protection', 'Disinfection and sterilization', "Men's Care",
        'Cleaning Supplies', "Women's Bags", "Women's Shoes",
        "Men's Jewelry", "Girl's Shoes", "Girl's Jewelry", "Boy's Shoes",
        'kN95/KF94 Mask', 'Flash Drives + Memory Cards',
        '6-7 Inch Android Phones', 'Apple Phones', 'Xiaomi Phone',
        'Laptops & Tablets', 'Apple iPad', 'Musical Instruments',
        'Computer Accessories', 'Ball Games', "Boy's Jewelry"],
       dtype=object), (47826, 40))

In [31]:

# Section: remove products that have slang/bad words in the name (testing only; not needed)

In [38]:

# Remove products that have slang words in the product name
unique_sorted_filtered_data = unique_sorted_data_filter_category[~(unique_sorted_data_filter_category['Full Product Name'].str.contains("Bad word 1", case=False, na=False))]
print(unique_sorted_filtered_data.shape)
unique_sorted_filtered_data = unique_sorted_filtered_data[~(unique_sorted_filtered_data['Full Product Name'].str.contains("Bad word 2", case=False, na=False))]
print(unique_sorted_filtered_data.shape)
unique_sorted_filtered_data = unique_sorted_filtered_data[~(unique_sorted_filtered_data['Full Product Name'].str.contains("Bad word 3", case=False, na=False))]
print(unique_sorted_filtered_data.shape)
unique_sorted_filtered_data = unique_sorted_filtered_data[~(unique_sorted_filtered_data['Full Product Name'].str.contains("Bad word 4", case=False, na=False))]
print(unique_sorted_filtered_data.shape)
unique_sorted_filtered_data = unique_sorted_filtered_data[~(unique_sorted_filtered_data['Full Product Name'].str.contains("Bad word 5", case=False, na=False))]
print(unique_sorted_filtered_data.shape)
unique_sorted_filtered_data = unique_sorted_filtered_data[~(unique_sorted_filtered_data['Full Product Name'].str.contains("Bad word 1", case=False, na=False))]
print(unique_sorted_filtered_data.shape)
unique_sorted_filtered_data = unique_sorted_filtered_data[~(unique_sorted_filtered_data['Full Product Name'].str.contains("Tracker", case=False, na=False))]
print(unique_sorted_filtered_data.shape)
unique_sorted_filtered_data = unique_sorted_filtered_data[~(unique_sorted_filtered_data['Full Product Name'].str.contains("Laser", case=False, na=False))]
print(unique_sorted_filtered_data.shape)
# Remove brands
unique_sorted_filtered_data = unique_sorted_filtered_data[~(unique_sorted_filtered_data['Full Product Name'].str.contains("VKworld", case=False, na=False))]
print(unique_sorted_filtered_data.shape)
unique_sorted_filtered_data = unique_sorted_filtered_data[~(unique_sorted_filtered_data['Full Product Name'].str.contains("Samsung", case=False, na=False))]
print(unique_sorted_filtered_data.shape)
# Video, streaming, HDMI
unique_sorted_filtered_data = unique_sorted_filtered_data[~(unique_sorted_filtered_data['Full Product Name'].str.contains("Car Video", case=False, na=False))]
print(unique_sorted_filtered_data.shape)
unique_sorted_filtered_data = unique_sorted_filtered_data[~(unique_sorted_filtered_data['Full Product Name'].str.contains("Streaming", case=False, na=False))]
print(unique_sorted_filtered_data.shape)
unique_sorted_filtered_data = unique_sorted_filtered_data[~(unique_sorted_filtered_data['Full Product Name'].str.contains("HDMI", case=False, na=False))]
print(unique_sorted_filtered_data.shape)
unique_sorted_filtered_data.shape

(47001, 40)
(46988, 40)
(46968, 40)
(46968, 40)
(46959, 40)
(46959, 40)
(46441, 40)
(46388, 40)
(46381, 40)
(45357, 40)
(45349, 40)
(45338, 40)
(44911, 40)

Out[38]:

(44911, 40)

In [39]:

# send the filtered data to a file
unique_sorted_filtered_data.to_csv("../all_supplier_data_unique_sorted_and_filtered.csv");
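One caveat here (my observation, not a claim from the original notebook): to_csv writes the DataFrame index by default, so reading this file back yields an extra "Unnamed: 0" column unless index=False is passed, as the Part 1 export does. A small round-trip through an in-memory buffer shows the difference:

```python
import io

import pandas as pd

df = pd.DataFrame({"Model Code": ["A1", "B2"], "Retail Price": [9.99, 19.99]})

# Default: to_csv writes the index as an extra, unnamed first column
buf = io.StringIO()
df.to_csv(buf)
buf.seek(0)
cols_with_index = pd.read_csv(buf).columns.tolist()
print(cols_with_index)  # ['Unnamed: 0', 'Model Code', 'Retail Price']

# index=False writes only the real columns (as in the Part 1 export)
buf = io.StringIO()
df.to_csv(buf, index=False)
buf.seek(0)
cols_clean = pd.read_csv(buf).columns.tolist()
print(cols_clean)  # ['Model Code', 'Retail Price']
```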

***. ***. ***

Note: Older short-notes from this site are posted on Medium: https://medium.com/@SayedAhmedCanada

*** . *** *** . *** . *** . ***

Sayed Ahmed

BSc. Eng. in Comp. Sc. & Eng. (BUET)

MSc. in Comp. Sc. (U of Manitoba, Canada)

MSc. in Data Science and Analytics (Ryerson University, Canada)

Linkedin: https://ca.linkedin.com/in/sayedjustetc

Blog: http://Bangla.SaLearningSchool.com, http://SitesTree.com

Training Courses: http://Training.SitesTree.com

8112223 Canada Inc/Justetc: http://JustEtc.net

Facebook Groups/Forums to discuss (Q & A):

https://www.facebook.com/banglasalearningschool

https://www.facebook.com/justetcsocial

Get access to courses on Big Data, Data Science, AI, Cloud, Linux, System Admin, Web Development and Misc. related. Also, create your own course to sell to others. http://sitestree.com/training/


Python: Ecommerce: Part 2: Drop Duplicates, Sort, and Take Only Unique Products After Merging All Supplier Data Files into One File

All code in one block

# # Section: Verify and process supplier data before sending products to
# # your retail (Magento 2) or marketplace (Amazon, Walmart) channels

# In[7]:
# combined_csv.sort_values("Model Code", inplace=True)
# Drop ALL duplicate values based on Product SKU = Model Code
no_duplicates_combined_csv = combined_csv.drop_duplicates(subset="Model Code", keep=False, inplace=False)
no_duplicates_combined_csv.shape

# In[8]:
# 55690 vs 55527

# In[9]:
no_duplicates_combined_csv_verify = combined_csv
type(no_duplicates_combined_csv_verify)

# In[10]:
# Verify the shape after dropping duplicates
no_duplicates_combined_csv_verify.drop_duplicates(subset="Model Code", keep=False, inplace=True)
len(no_duplicates_combined_csv_verify)

# In[11]:
# 55690 vs 55527

# In[12]:
# Show combined data: first 3 rows
no_duplicates_combined_csv[:3]

# In[16]:
# Stop

# # Find only the unique products, sorted, with duplicates removed

# In[14]:
# Sort by SKU = Model Code
sorted_merged_data = no_duplicates_combined_csv.sort_values("Model Code", inplace=False)
sorted_merged_data.head()
# Drop ALL duplicate values: no need here; old code, keeping it anyway
unique_sorted_data = sorted_merged_data.drop_duplicates(subset="Model Code", keep=False, inplace=False)
unique_sorted_data.head(3)

# In[15]:
# Total data count at this point
unique_sorted_data.shape
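Note the keep=False semantics that the row counts above depend on: it drops every copy of a duplicated Model Code, not just the extra copies. A minimal illustration (toy data, not the supplier feed):

```python
import pandas as pd

df = pd.DataFrame({
    "Model Code": ["A1", "A1", "B2", "C3"],
    "Retail Price": [9.99, 9.49, 19.99, 4.99],
})

# keep=False drops BOTH "A1" rows; keep="first" would retain one of them
all_dropped = df.drop_duplicates(subset="Model Code", keep=False)
one_kept = df.drop_duplicates(subset="Model Code", keep="first")

print(all_dropped["Model Code"].tolist())  # ['B2', 'C3']
print(one_kept["Model Code"].tolist())     # ['A1', 'B2', 'C3']
```

keep=False makes sense here if a SKU that appears in two supplier feeds with conflicting data should be excluded entirely rather than resolved arbitrarily.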

From Jupyter Notebook: Cell by Cell with output

Section: Verify and process supplier data before sending products to your retail (Magento 2) or marketplace (Amazon, Walmart) channels

In [7]:

# combined_csv.sort_values("Model Code", inplace=True)
# Drop ALL duplicate values based on Product SKU = Model Code
no_duplicates_combined_csv = combined_csv.drop_duplicates(subset="Model Code", keep=False, inplace=False)
no_duplicates_combined_csv.shape

Out[7]:

(55527, 40)

In [8]:

#55690 vs 55527

In [9]:

no_duplicates_combined_csv_verify = combined_csv
type(no_duplicates_combined_csv_verify)

Out[9]:

pandas.core.frame.DataFrame

In [10]:

# Verify the shape after dropping duplicates
no_duplicates_combined_csv_verify.drop_duplicates(subset="Model Code", keep=False, inplace=True)
len(no_duplicates_combined_csv_verify)

Out[10]:

55527
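One thing to watch in this verification cell (a general Python/pandas behavior, not a claim about the original data): the assignment no_duplicates_combined_csv_verify = combined_csv binds a second name to the same DataFrame rather than copying it, so the inplace=True drop above also shrinks combined_csv. Use .copy() if the original frame must survive:

```python
import pandas as pd

combined = pd.DataFrame({"Model Code": ["A1", "A1", "B2"]})

alias = combined               # same object: no data is copied
independent = combined.copy()  # a real, independent copy

# Dropping in place through the alias mutates the shared object
alias.drop_duplicates(subset="Model Code", keep=False, inplace=True)

print(len(combined))     # 1 -- the original shrank too
print(len(independent))  # 3 -- the copy is untouched
```

Here that aliasing is harmless because the verify frame is only used to double-check the count, but it would matter if combined_csv were used again afterwards.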

In [11]:

#55690 vs 55527

In [12]:

# Show combined data: first 3 rows
no_duplicates_combined_csv[:3]

Out[12]:

[First 3 rows of no_duplicates_combined_csv. Columns include: Product ID, Model Code, Full Product Name, Short Product Name, Product URL, Category Name, Category URL, Subcategory Name, Subcategory URL, Date Product Was Launched, …, Related Products, Related Accessories, Weight Kg, Height mm, Width mm, Depth mm, Video link, Retail Price, Stock status, Date Back. First row: Model Code POU_0850GV7Y, "Pull Rope Fitness Exercises Resistance Bands L…"]

3 rows × 40 columns

In [16]:

# Stop

Find only the unique products, sorted and with duplicates removed

In [14]:

# Sort by SKU = Model Code
sorted_merged_data = no_duplicates_combined_csv.sort_values("Model Code", inplace=False)
sorted_merged_data.head()
# Drop ALL duplicate values: no need here; old code, keeping it anyway
unique_sorted_data = sorted_merged_data.drop_duplicates(subset="Model Code", keep=False, inplace=False)
unique_sorted_data.head(3)

Out[14]:

[First 3 rows of unique_sorted_data, same columns as above. First row: Model Code A01AL3301111, "Black 3x3x3 MoYu AoLong V2 Puzzle"]

3 rows × 40 columns

In [15]:

# Total data count at this point
unique_sorted_data.shape

Out[15]:

(55527, 40)


Python: Ecommerce: Part 1: Merge Multiple Supplier Data Files into One File

Section: Merge multiple Supplier Data Files

All code in one block

#!/usr/bin/env python
# coding: utf-8

# # Section: Merge multiple Supplier Data Files

# In[1]:
# If there is a need to merge multiple files -- use this block
import os
import glob
import pandas as pd

# Supplier data files/feeds are kept here
data_folder = 'data-supplier-2019-04-14/supplier-raw-data/'
os.chdir(data_folder)

# In[6]:
# Show all data feed file names
# File extension for supplier data files
extension = 'csv'
all_filenames = [i for i in glob.glob('*.{}'.format(extension))]
all_filenames

# In[7]:
# Total number of rows across all data files/feeds
row_total_count = 0
for f in all_filenames:
    df_s = pd.read_csv(f)
    print(df_s.shape, f)
    row_total_count += df_s.shape[0]
row_total_count  # print(row_total_count)

# In[8]:
# Combine all files in the list
combined_csv = pd.concat([pd.read_csv(f) for f in all_filenames])
combined_csv.shape

# In[10]:
# Export combined data to a csv file
combined_csv.to_csv("../all_supplier_products_2019_04_14.csv", index=False, encoding='utf-8-sig')

# In[13]:
# Read the csv data file and show the data on the screen
df = pd.read_csv('../all_supplier_products_2019_04_14.csv')
df.head()
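If you later need to know which supplier feed a row came from, one possible extension is to tag each frame before concatenating (the source_file column name is my own addition, not part of the original feed schema; the in-memory "feeds" below stand in for the real CSV files):

```python
import io

import pandas as pd

# Two small stand-ins for supplier CSV feeds
feeds = {
    "feed_a.csv": io.StringIO("Model Code,Retail Price\nA1,9.99\n"),
    "feed_b.csv": io.StringIO("Model Code,Retail Price\nB2,19.99\nC3,4.99\n"),
}

# Tag every row with its source file, then concatenate
combined_csv = pd.concat(
    [pd.read_csv(f).assign(source_file=name) for name, f in feeds.items()],
    ignore_index=True,  # avoids repeating each file's own 0..n index
)

print(combined_csv.shape)  # (3, 3)
print(combined_csv["source_file"].tolist())
```

The extra column makes it easy to trace a bad row (or a duplicated SKU) back to the feed that supplied it.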

The following is from Jupyter Notebook: Cell By Cell Display. Output data are also shown

In [1]:

# if there is a need to merge multiple files -- use this blockimport os;import glob;import pandas as pd;# supplier data files/feeds are kept heredata_folder = 'data-supplier-2019-04-14/supplier-raw-data/';os.chdir(data_folder);

In [6]:

# show all data feed file name# file extension for supplier data fileextension = 'csv';all_filenames = [i for i in glob.glob('*.{}'.format(extension))]all_filenames

Out[6]:

['data_feeds_5e95c25a1f7f6.csv',
 'data_feeds_5e95c2962d471.csv',
 'data_feeds_5e95c2d255409.csv',
 'data_feeds_5e95c30e63423.csv',
 'data_feeds_5e95c38646478.csv',
 'data_feeds_5e95c5dd76370.csv']

In [7]:

# Total number of rows across all data files/feeds
row_total_count = 0
for f in all_filenames:
    df_s = pd.read_csv(f)
    print(df_s.shape, f)
    row_total_count += df_s.shape[0]
row_total_count  # print(row_total_count)

(8058, 40) data_feeds_5e95c25a1f7f6.csv
(7, 40) data_feeds_5e95c2962d471.csv
(1, 40) data_feeds_5e95c2d255409.csv
... ....
(1072, 40) data_feeds_5e95c565d6e30.csv
(4833, 40) data_feeds_5e95c5dd76370.csv

Out[7]:

55690

In [8]:

# Combine all files in the list
combined_csv = pd.concat([pd.read_csv(f) for f in all_filenames])
combined_csv.shape

Out[8]:

(55690, 40)

In [10]:

# Export combined data to a csv file
combined_csv.to_csv("../all_supplier_products_2019_04_14.csv", index=False, encoding='utf-8-sig')
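The encoding='utf-8-sig' choice writes a UTF-8 byte-order mark at the start of the file; in my experience (not something the notebook states) this helps Excel detect the encoding and display non-ASCII product names correctly, and pandas strips the BOM transparently when reading the file back:

```python
import io

import pandas as pd

df = pd.DataFrame({"Full Product Name": ["Café Lamp"]})

# pandas >= 1.2 accepts a binary buffer together with an encoding
buf = io.BytesIO()
df.to_csv(buf, index=False, encoding="utf-8-sig")

raw = buf.getvalue()
print(raw[:3])  # b'\xef\xbb\xbf' -- the UTF-8 BOM that Excel looks for

# Reading back with utf-8-sig removes the BOM and preserves the accents
buf.seek(0)
round_tripped = pd.read_csv(buf, encoding="utf-8-sig")
print(round_tripped["Full Product Name"][0])  # Café Lamp
```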

In [13]:

df = pd.read_csv('../all_supplier_products_2019_04_14.csv')
df.head()

Out[13]:

[First 5 rows of the combined data. Columns include: Product ID, Model Code, Full Product Name, Short Product Name, Product URL, Category Name, Category URL, Subcategory Name, Subcategory URL, Date Product Was Launched, …, Related Products, Related Accessories, Weight Kg, Height mm, Width mm, Depth mm, Video link, Retail Price, Stock status, Date Back. First row: Model Code POU_0850GV7Y, "Pull Rope Fitness Exercises Resistance Bands L…"]

***. ***. ***

Note: Older short-notes from this site are posted on Medium: https://medium.com/@SayedAhmedCanada

*** . *** *** . *** . *** . ***

Sayed Ahmed

BSc. Eng. in Comp. Sc. & Eng. (BUET)

MSc. in Comp. Sc. (U of Manitoba, Canada)

MSc. in Data Science and Analytics (Ryerson University, Canada)

Linkedinhttps://ca.linkedin.com/in/sayedjustetc

Bloghttp://Bangla.SaLearningSchool.comhttp://SitesTree.com

Training Courses: http://Training.SitesTree.com

8112223 Canada Inc/Justetchttp://JustEtc.net

Facebook Groups/Forums to discuss (Q & A):

https://www.facebook.com/banglasalearningschool

https://www.facebook.com/justetcsocial

Get access to courses on Big Data, Data Science, AI, Cloud, Linux, System Admin, Web Development and Misc. related. Also, create your own course to sell to others. http://sitestree.com/training/

Build Ecommerce Software and Systems

Build Ecommerce Software and Systems

8112223 Canada Inc. (Justetc)

WRITTEN BY

Software Engineer, Data Scientist, Machine Learning Engineer.

Build Ecommerce Software and Systems

Build Ecommerce Software and Systems

Python: Ecommerce: Part — 1: Merge Multiple Supplier Data Files into One File

Python: Ecommerce: Part — 1: Merge Multiple Supplier Data Files into One File

Section: Merge multiple Supplier Data Files

All code in one block

#!/usr/bin/env python
# coding: utf-8
# # Section: Merge multiple Supplier Data Files
#
# In[1]:
# if there is a need to merge multiple files — use this block
import os;
import glob;
import pandas as pd;
# supplier data files/feeds are kept here
data_folder = ‘data-supplier-2019–04–14/supplier-raw-data/’;
os.chdir(data_folder);
# In[6]:
# show all data feed file name
# file extension for supplier data file
extension = ‘csv’;
all_filenames = [i for i in glob.glob(‘*.{}’.format(extension))]
all_filenames
# In[7]:
# total number of rows combined all data files/feeds
row_total_count = 0
for f in all_filenames:
df_s = pd.read_csv(f)
print(df_s.shape, f)
row_total_count += df_s.shape[0]
row_total_count # print(row_total_count)
# In[8]:
# combine all files in the list
combined_csv = pd.concat([pd.read_csv(f) for f in all_filenames]);
combined_csv.shape
# In[10]:
# export combined data to a csv file
combined_csv.to_csv( “../all_supplier_products_2019_04_14.csv”, index=False, encoding=’utf-8-sig’)
# In[13]:
# read csv data file and show data on the screen
df = pd.read_csv(‘../all_supplier_products_2019_04_14.csv’);
df.head()

The following is from Jupyter Notebook: Cell By Cell Display. Output data are also shown

In [1]:

# if there is a need to merge multiple files -- use this block
import os;
import glob;
import pandas as pd;

# supplier data files/feeds are kept here
data_folder = 'data-supplier-2019-04-14/supplier-raw-data/';
os.chdir(data_folder);

In [6]:

# show all data feed file name
# file extension for supplier data file
extension = 'csv';
all_filenames = [i for i in glob.glob('*.{}'.format(extension))]
all_filenames

Out[6]:

['data_feeds_5e95c25a1f7f6.csv',
'data_feeds_5e95c2962d471.csv',
'data_feeds_5e95c2d255409.csv',
'data_feeds_5e95c30e63423.csv',
'data_feeds_5e95c38646478.csv',
'data_feeds_5e95c5dd76370.csv']

In [7]:

# total number of rows combined all data files/feeds
row_total_count = 0
for f in all_filenames:
df_s = pd.read_csv(f)
print(df_s.shape, f)
row_total_count += df_s.shape[0]
row_total_count # print(row_total_count)
(8058, 40) data_feeds_5e95c25a1f7f6.csv
(7, 40) data_feeds_5e95c2962d471.csv
(1, 40) data_feeds_5e95c2d255409.csv
... ....
(1072, 40) data_feeds_5e95c565d6e30.csv
(4833, 40) data_feeds_5e95c5dd76370.csv

Out[7]:

55690

In [8]:

# combine all files in the list
combined_csv = pd.concat([pd.read_csv(f) for f in all_filenames]);
combined_csv.shape

Out[8]:

(55690, 40)

In [10]:

# export combined data to a csv file
combined_csv.to_csv( "../all_supplier_products_2019_04_14.csv", index=False, encoding='utf-8-sig')

In [13]:

df = pd.read_csv('../all_supplier_products_2019_04_14.csv');
df.head()

Out[13]:

Product ID Model Code Full Product NameShort Product NameProduct URLCategory NameCategory URLSubcategory NameSubcategory URLDate Product Was Launched…Related ProductsRelated AccessoriesWeight KgHeight mmWidth mmDepth mmVideo linkRetail PriceStock statusDate Back0107890POU_0850GV7YPull Rope Fitness Exercises Resistance Bands L…Pull Rope Fitness

***. ***. ***
Note: Older short-notes from this site are posted on Medium: https://medium.com/@SayedAhmedCanada

*** . *** *** . *** . *** . ***
Sayed Ahmed

BSc. Eng. in Comp. Sc. & Eng. (BUET)
MSc. in Comp. Sc. (U of Manitoba, Canada)
MSc. in Data Science and Analytics (Ryerson University, Canada)
Linkedin: https://ca.linkedin.com/in/sayedjustetc

Blog: http://Bangla.SaLearningSchool.com, http://SitesTree.com
Training Courses: http://Training.SitesTree.com
8112223 Canada Inc/Justetc: http://JustEtc.net

Facebook Groups/Forums to discuss (Q & A):
https://www.facebook.com/banglasalearningschool
https://www.facebook.com/justetcsocial

Get access to courses on Big Data, Data Science, AI, Cloud, Linux, System Admin, Web Development and Misc. related. Also, create your own course to sell to others. http://sitestree.com/training/

Python: Ecommerce: Part — 3: Remove Unwanted Category (and Products), Also, remove products based on Words in the Title — after Merging All Supplier Data Files into One File

Python: Ecommerce: Part — 3: Remove Unwanted Category (and Products), Also, remove products based on Words in the Title — after Merging All Supplier Data Files into One File

Python: Ecommerce: Part — 3: Remove Unwanted Category (and Products), Also, remove products based on Words in the Title — after Merging All Supplier Data Files into One File.

You could as well remove products that are not allowed in a country or in a market place as well as products that you are not authorized to sell (some brands)

All Code in One Block. Please check the other parts of this series/publication

The code could be simplified/reduced. You could join multiple blocks into one just by keeping the words/category names in a list; and then filtering against that list. You could as well join conditions using and (&) or or-operations (|) to reduce the number of lines of code.

# # Section Remove products that have slang words
# In[24]:
unique_sorted_data[‘Category Name’].unique()
# In[30]:
# Remove products from a category that you do not want to sell
# Apparel
unique_sorted_data_filter_category = unique_sorted_data [
~( unique_sorted_data[‘Category Name’].str.contains(“Apparel”, case = False, na=False ) ) ];
# android TV Box
unique_sorted_data_filter_category = unique_sorted_data_filter_category [
~( unique_sorted_data_filter_category[‘Category Name’].str.contains(“TV Box”, case = False, na=False ) ) ];
# Laser Products
unique_sorted_data_filter_category = unique_sorted_data_filter_category [
~( unique_sorted_data_filter_category[‘Category Name’].str.contains(“Laser”, case = False, na=False ) ) ];
# Costume
unique_sorted_data_filter_category = unique_sorted_data_filter_category [
~( unique_sorted_data_filter_category[‘Category Name’].str.contains(“Costume”, case = False, na=False ) ) ];
# Showing the count after removal
unique_sorted_data_filter_category[‘Category Name’].unique(), unique_sorted_data_filter_category.shape
# In[31]:
#################
# section Remove products that have slang/bad words in the name -- you do not need this block - was for testing only. Another block will do this job
unique_sorted_data_filter_1 = unique_sorted_data[ 
( unique_sorted_data[‘Full Product Name’].str.contains(“Slang 1“, case = False, na=False) ) | ( unique_sorted_data[‘Full Product Name’].str.contains(“Slang 2“, case = False, na=False) ) | ( unique_sorted_data[‘Full Product Name’].str.contains(“Slang 3“, case = False, na=False) ) | ( unique_sorted_data[‘Full Product Name’].str.contains(“Slang 4“, case = False, na=False) )
] #[ {‘Full Product Name’, ‘Category Name’}];
unique_sorted_data_filter_1.shape #, unique_sorted_data_filter_1.head(1) #, “\n “, unique_sorted_data_filter_1.shape
########################
# In[38]:
# Remove products that have slang words in the product name
unique_sorted_filtered_data = unique_sorted_data_filter_category [
 ~( unique_sorted_data_filter_category['Full Product Name'].str.contains("Bad word 1", case=False, na=False ) ) ];
print(unique_sorted_filtered_data.shape);
unique_sorted_filtered_data = unique_sorted_filtered_data [
 ~( unique_sorted_filtered_data['Full Product Name'].str.contains("Bad word 2", case=False, na=False ) ) ];
print(unique_sorted_filtered_data.shape);
unique_sorted_filtered_data = unique_sorted_filtered_data [
 ~( unique_sorted_filtered_data['Full Product Name'].str.contains("Bad word 3", case=False, na=False ) ) ];
print(unique_sorted_filtered_data.shape);
unique_sorted_filtered_data = unique_sorted_filtered_data [
 ~( unique_sorted_filtered_data['Full Product Name'].str.contains("Bad word 4", case=False, na=False ) ) ];
print(unique_sorted_filtered_data.shape);
unique_sorted_filtered_data = unique_sorted_filtered_data [
 ~( unique_sorted_filtered_data['Full Product Name'].str.contains("Bad word 5", case=False, na=False ) ) ];
print(unique_sorted_filtered_data.shape);
# note: "Bad word 1" was already filtered above, so this repeated filter is a no-op
unique_sorted_filtered_data = unique_sorted_filtered_data [
 ~( unique_sorted_filtered_data['Full Product Name'].str.contains("Bad word 1", case=False, na=False ) ) ];
print(unique_sorted_filtered_data.shape);
unique_sorted_filtered_data = unique_sorted_filtered_data [
 ~( unique_sorted_filtered_data['Full Product Name'].str.contains("Tracker", case=False, na=False ) ) ];
print(unique_sorted_filtered_data.shape);
unique_sorted_filtered_data = unique_sorted_filtered_data [
 ~( unique_sorted_filtered_data['Full Product Name'].str.contains("Laser", case=False, na=False ) ) ];
# brands that you do not want to sell
print(unique_sorted_filtered_data.shape);
unique_sorted_filtered_data = unique_sorted_filtered_data [
 ~( unique_sorted_filtered_data['Full Product Name'].str.contains("VKworld", case=False, na=False ) ) ];
print(unique_sorted_filtered_data.shape);
unique_sorted_filtered_data = unique_sorted_filtered_data [
 ~( unique_sorted_filtered_data['Full Product Name'].str.contains("Samsung", case=False, na=False ) ) ];
# video streaming / TV Box / HDMI -- these products are sensitive (intellectual property rights)
print(unique_sorted_filtered_data.shape);
unique_sorted_filtered_data = unique_sorted_filtered_data [
 ~( unique_sorted_filtered_data['Full Product Name'].str.contains("Car Video", case=False, na=False ) ) ];
print(unique_sorted_filtered_data.shape);
unique_sorted_filtered_data = unique_sorted_filtered_data [
 ~( unique_sorted_filtered_data['Full Product Name'].str.contains("Streaming", case=False, na=False ) ) ];
print(unique_sorted_filtered_data.shape);
unique_sorted_filtered_data = unique_sorted_filtered_data [
 ~( unique_sorted_filtered_data['Full Product Name'].str.contains("HDMI", case=False, na=False ) ) ];
print(unique_sorted_filtered_data.shape);
unique_sorted_filtered_data.shape
# In[39]:
unique_sorted_filtered_data.to_csv("../all_supplier_data_unique_sorted_and_filtered.csv");
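As the introduction notes, the whole repeated "filter, then print the shape" cell can be driven from a list of banned terms in a loop, keeping the per-term shape printout as a record of how many rows each term removes. A sketch with placeholder data and terms; only the column names follow this post:

```python
import re
import pandas as pd

# illustrative stand-in for the merged, deduplicated supplier data
df = pd.DataFrame({
    "Full Product Name": ["HDMI cable", "Laser pointer", "Desk lamp", "Streaming stick"],
    "Category Name": ["Electronic Accessories"] * 4,
})

banned_terms = ["Laser", "Streaming", "HDMI"]  # extend with bad words, brands, etc.

filtered = df
for term in banned_terms:
    filtered = filtered[
        ~filtered["Full Product Name"].str.contains(re.escape(term), case=False, na=False)
    ]
    print(term, filtered.shape)  # shows how many rows each term removed

print(filtered.shape)  # (1, 2) -- only "Desk lamp" survives here
```

Adding or removing a term is then a one-line change to the list instead of a new six-line block.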

From Jupyter Notebook: Cell by Cell with Output

Section: remove products whose names contain slang/bad words, along with brands and products you do not want to sell (restricted items, or products you are not allowed to sell/resell).

In [24]:

unique_sorted_data['Category Name'].unique()

Out[24]:

array(['Toys & Games', 'Drone & Quadcopter', 'Cool Gadgets',
'Office supplies', 'Novelty Costumes & Accessories',
"Women's Jewelry", 'External Parts', 'Vehicle Electronics Devices',
'Replacement Parts', 'Internal Parts', 'Lamps and Accessories',
'Video Games', 'Hair Care', 'Skin Care',
'Makeup Tool & Accessories', 'Household Products',
"Women's Accessories", "Women's Apparel", 'Home accessories',
'Oral Respiratory Protection', "Men's Accessories",
"Men's Apparel", "Girl's Apparel", 'Cell Phone Accessories',
'Health Care', 'Electronic Accessories', 'Health tools',
'Computer Peripherals', 'Audio & Video Gadgets', nan,
'Headrest Monitors & DVD Players', 'Car DVR',
'Camera Equipment / Accessories', 'Personal Care',
'Laser Gadgets & Measuring Tools', 'Accessories',
'Electronic Cigarettes', 'Sports Action Camera',
'Android TV Box / Stick', 'Sports & Body Building',
'Smart Watches', 'Security & Surveillance', 'Android Tablets',
'Musical Instruments & Accessorie', 'LED', 'Outdoor Recrections',
'Tools & Home Decor', 'Home, Kitchen & Garden', 'Home Electrical',
'Bedding & Bath', 'Camping & Hiking', 'Drives & Storage',
'Pet Supplies', 'Hunting & Fishing', 'Garden & Lawn',
'Medical treatments', 'Android Smartphones', 'Car Video',
'Cell Phones', 'Cycling', 'Solar Products', 'Doogee Phones',
'Rugged Phones', 'Ulefone Phones', 'Xiaomi Phones', 'Huawei Phone',
'Lenovo Phones', 'Refurbished iPhones', 'Samsung Phones',
'Water Sport', 'Tools & Equipment', 'Repair Accessories',
'Body protection', 'Disinfection and sterilization', "Men's Care",
'Cleaning Supplies', 'Baby Girls Apparel', "Women's Bags",
"Women's Shoes", "Men's Jewelry", 'Baby Boys Apparel',
"Boy's Apparel", "Girl's Shoes", "Girl's Jewelry", "Boy's Shoes",
'kN95/KF94 Mask', 'Flash Drives + Memory Cards',
'6-7 Inch Android Phones', 'Apple Phones', 'Xiaomi Phone',
'Laptops & Tablets', 'Apple iPad', 'Musical Instruments',
'Computer Accessories', 'Ball Games', "Boy's Jewelry"], dtype=object)

In [30]:

# Remove products from a category that you do not want to sell
unique_sorted_data_filter_category = unique_sorted_data [ ~( unique_sorted_data['Category Name'].str.contains("Apparel", case = False, na=False ) )];

unique_sorted_data_filter_category = unique_sorted_data_filter_category [ ~( unique_sorted_data_filter_category['Category Name'].str.contains("TV Box", case = False, na=False ) )];

unique_sorted_data_filter_category = unique_sorted_data_filter_category [~( unique_sorted_data_filter_category['Category Name'].str.contains("Laser", case = False, na=False ) )];

unique_sorted_data_filter_category = unique_sorted_data_filter_category [ ~( unique_sorted_data_filter_category['Category Name'].str.contains("Costume", case = False, na=False ) )
];

unique_sorted_data_filter_category['Category Name'].unique(), unique_sorted_data_filter_category.shape

Out[30]:

(array(['Toys & Games', 'Drone & Quadcopter', 'Cool Gadgets',
'Office supplies', "Women's Jewelry", 'External Parts',
'Vehicle Electronics Devices', 'Replacement Parts',
'Internal Parts', 'Lamps and Accessories', 'Video Games',
'Hair Care', 'Skin Care', 'Makeup Tool & Accessories',
'Household Products', "Women's Accessories", 'Home accessories',
'Oral Respiratory Protection', "Men's Accessories",
'Cell Phone Accessories', 'Health Care', 'Electronic Accessories',
'Health tools', 'Computer Peripherals', 'Audio & Video Gadgets',
nan, 'Headrest Monitors & DVD Players', 'Car DVR',
'Camera Equipment / Accessories', 'Personal Care', 'Accessories',
'Electronic Cigarettes', 'Sports Action Camera',
'Sports & Body Building', 'Smart Watches',
'Security & Surveillance', 'Android Tablets',
'Musical Instruments & Accessorie', 'LED', 'Outdoor Recrections',
'Tools & Home Decor', 'Home, Kitchen & Garden', 'Home Electrical',
'Bedding & Bath', 'Camping & Hiking', 'Drives & Storage',
'Pet Supplies', 'Hunting & Fishing', 'Garden & Lawn',
'Medical treatments', 'Android Smartphones', 'Car Video',
'Cell Phones', 'Cycling', 'Solar Products', 'Doogee Phones',
'Rugged Phones', 'Ulefone Phones', 'Xiaomi Phones', 'Huawei Phone',
'Lenovo Phones', 'Refurbished iPhones', 'Samsung Phones',
'Water Sport', 'Tools & Equipment', 'Repair Accessories',
'Body protection', 'Disinfection and sterilization', "Men's Care",
'Cleaning Supplies', "Women's Bags", "Women's Shoes",
"Men's Jewelry", "Girl's Shoes", "Girl's Jewelry", "Boy's Shoes",
'kN95/KF94 Mask', 'Flash Drives + Memory Cards',
'6-7 Inch Android Phones', 'Apple Phones', 'Xiaomi Phone',
'Laptops & Tablets', 'Apple iPad', 'Musical Instruments',
'Computer Accessories', 'Ball Games', "Boy's Jewelry"], dtype=object), (47826, 40))
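The cell above relies on substring matching, which is why a single "Apparel" pattern also removes "Women's Apparel", "Men's Apparel", and so on. If you ever need exact category matches instead, `Series.isin()` is the alternative, at the cost of listing every full category name. A small sketch using a few category names from the output above, with otherwise made-up data:

```python
import pandas as pd

df = pd.DataFrame({
    "Category Name": [
        "Women's Apparel",
        "Men's Apparel",
        "Toys & Games",
        "Laser Gadgets & Measuring Tools",
    ],
})

# Substring match: one "Apparel" pattern catches every apparel variant
by_substring = df[~df["Category Name"].str.contains("Apparel", case=False, na=False)]
print(by_substring.shape)  # (2, 1)

# Exact match with isin(): every unwanted category must be listed in full
banned = ["Women's Apparel", "Men's Apparel"]
by_exact = df[~df["Category Name"].isin(banned)]
print(by_exact.shape)  # (2, 1)
```

Substring matching is the more convenient choice here, but be aware it can also remove categories you did not intend if a pattern is too short.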

In [31]:

# section: remove products that have slang/bad words in the name (testing only; the next cell does the real filtering)

In [38]:

# Remove products that have slang words in the product name
unique_sorted_filtered_data = unique_sorted_data_filter_category [ ~( unique_sorted_data_filter_category['Full Product Name'].str.contains("Bad word 1", case=False, na=False ) )];

print(unique_sorted_filtered_data.shape);
unique_sorted_filtered_data = unique_sorted_filtered_data [~( unique_sorted_filtered_data['Full Product Name'].str.contains("Bad word 2", case=False, na=False ) )];
print(unique_sorted_filtered_data.shape);
unique_sorted_filtered_data = unique_sorted_filtered_data [~( unique_sorted_filtered_data['Full Product Name'].str.contains("Bad word 3", case=False, na=False ) )];
print(unique_sorted_filtered_data.shape);
unique_sorted_filtered_data = unique_sorted_filtered_data [~( unique_sorted_filtered_data['Full Product Name'].str.contains("Bad word 4", case=False, na=False ) )];
print(unique_sorted_filtered_data.shape);
unique_sorted_filtered_data = unique_sorted_filtered_data [~( unique_sorted_filtered_data['Full Product Name'].str.contains("Bad word 5", case=False, na=False ) )];
print(unique_sorted_filtered_data.shape);
unique_sorted_filtered_data = unique_sorted_filtered_data [~( unique_sorted_filtered_data['Full Product Name'].str.contains("Bad word 1", case=False, na=False ) )];
# product types you do not want to sell
print(unique_sorted_filtered_data.shape);
unique_sorted_filtered_data = unique_sorted_filtered_data [ ~( unique_sorted_filtered_data['Full Product Name'].str.contains("Tracker", case=False, na=False ) ) ];
print(unique_sorted_filtered_data.shape);
unique_sorted_filtered_data = unique_sorted_filtered_data [ ~( unique_sorted_filtered_data['Full Product Name'].str.contains("Laser", case=False, na=False ) )];
print(unique_sorted_filtered_data.shape);
# remove brands
unique_sorted_filtered_data = unique_sorted_filtered_data [ ~( unique_sorted_filtered_data['Full Product Name'].str.contains("VKworld", case=False, na=False ) )];
print(unique_sorted_filtered_data.shape);
unique_sorted_filtered_data = unique_sorted_filtered_data [ ~( unique_sorted_filtered_data['Full Product Name'].str.contains("Samsung", case=False, na=False ) )];
print(unique_sorted_filtered_data.shape);
# video, streaming, HDMI
unique_sorted_filtered_data = unique_sorted_filtered_data [ ~( unique_sorted_filtered_data['Full Product Name'].str.contains("Car Video", case=False, na=False ) )];
print(unique_sorted_filtered_data.shape);
unique_sorted_filtered_data = unique_sorted_filtered_data [ ~( unique_sorted_filtered_data['Full Product Name'].str.contains("Streaming", case=False, na=False ) )];
print(unique_sorted_filtered_data.shape);
unique_sorted_filtered_data = unique_sorted_filtered_data [~( unique_sorted_filtered_data['Full Product Name'].str.contains("HDMI", case=False, na=False ) )];
print(unique_sorted_filtered_data.shape);
unique_sorted_filtered_data.shape
(47001, 40)
(46988, 40)
(46968, 40)
(46968, 40)
(46959, 40)
(46959, 40)
(46441, 40)
(46388, 40)
(46381, 40)
(45357, 40)
(45349, 40)
(45338, 40)
(44911, 40)

Out[38]:

(44911, 40)

In [39]:

# send the filtered data to a file
unique_sorted_filtered_data.to_csv("../all_supplier_data_unique_sorted_and_filtered.csv");
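One small caveat about the export: by default, `to_csv()` also writes the DataFrame index as an extra unnamed column, which can confuse downstream tools that expect only the data columns. Passing `index=False` avoids this. A sketch using a temporary file path rather than the post's actual path:

```python
import os
import tempfile
import pandas as pd

# illustrative stand-in for the filtered supplier data
df = pd.DataFrame({
    "Full Product Name": ["Desk lamp"],
    "Category Name": ["Home accessories"],
})

# write without the index so the CSV contains only the data columns
path = os.path.join(tempfile.gettempdir(), "filtered_sketch.csv")
df.to_csv(path, index=False)

back = pd.read_csv(path)
print(list(back.columns))  # ['Full Product Name', 'Category Name'] -- no index column
```

If you do want the index preserved (e.g. as a stable row id), keep the default and name it with `index_label=`.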

***. ***. ***
Note: Older short-notes from this site are posted on Medium: https://medium.com/@SayedAhmedCanada

*** . *** *** . *** . *** . ***
Sayed Ahmed

BSc. Eng. in Comp. Sc. & Eng. (BUET)
MSc. in Comp. Sc. (U of Manitoba, Canada)
MSc. in Data Science and Analytics (Ryerson University, Canada)
Linkedin: https://ca.linkedin.com/in/sayedjustetc

Blog: http://Bangla.SaLearningSchool.com, http://SitesTree.com
Training Courses: http://Training.SitesTree.com
8112223 Canada Inc/Justetc: http://JustEtc.net

Facebook Groups/Forums to discuss (Q & A):
https://www.facebook.com/banglasalearningschool
https://www.facebook.com/justetcsocial

Get access to courses on Big Data, Data Science, AI, Cloud, Linux, System Admin, Web Development and Misc. related. Also, create your own course to sell to others. http://sitestree.com/training/
