
# Machine Learning

Whatās the best language to learn if I want to get involved with machine learning?

15th Dec 2017, 11:05 AM
Marcus
13 Answers
+ 13
Machine learning algorithms are best implemented in Python, R, MATLAB and Julia. Of those, R and MATLAB are pretty hermetic, while Julia is relatively new, but growing strong. I would (and did ;)) focus on Python 🐍
15th Dec 2017, 11:30 AM
Kuba Siekierzyński
+ 7
PHP, VBScript, SMX (dedicated to web pages), Tcl (server-side in NaviServer and an essential component in electronics industry systems), WebDNA (dedicated to database-driven websites), AngelScript, Ch, EEL, Io, Julia, Lua, MiniD, Python, Ruby (via mruby), Squirrel, Tcl
15th Dec 2017, 11:09 AM
Scooby
+ 5
PHP is the best, I think.
15th Dec 2017, 11:11 AM
Scooby
+ 4
Python is the current favourite.
29th May 2019, 11:45 AM
Sonic
+ 2
Letās say we have a feature matrix X. Drag and drop in order to correctly define X_train and X_test for the second fold. Answer: kf = KFold(n_splits=5, shuffle=True) splits = list(kf.split(X)) a, b = splits[1] X_train = X[a] X_test = X[b]
21st Nov 2020, 2:21 AM
Jason Chew
+ 1
Which of the following could be the output of this code, assuming X has 3 datapoints?
kf = KFold(n_splits=3, shuffle=True)
splits = list(kf.split(X))
print(splits[0])
Answer:
([0, 2], [1])
([0, 1], [2])
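You can check what that print actually produces by running it on any 3-row array (the zeros matrix below is just a placeholder for X):

```python
import numpy as np
from sklearn.model_selection import KFold

X = np.zeros((3, 2))  # any feature matrix with 3 datapoints works here

kf = KFold(n_splits=3, shuffle=True, random_state=0)
splits = list(kf.split(X))
# each split is a (train indices, test indices) pair,
# e.g. something like (array([0, 1]), array([2]))
print(splits[0])
```

With 3 datapoints and 3 folds, every split trains on 2 indices and tests on the remaining 1, which is why both listed answers are possible outputs.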
21st Nov 2020, 2:16 AM
Jason Chew
+ 1
Complete the code to do a k-fold cross validation where k=5 and calculate the accuracy. X is the feature matrix and y is the target array.
scores = []
kf = KFold(n_splits=5, shuffle=True)
for train_index, test_index in kf.split(X):
    X_train, X_test = X[train_index], X[test_index]
    y_train, y_test = y[train_index], y[test_index]
    model = LogisticRegression()
    model.fit(X_train, y_train)
    scores.append(model.score(X_test, y_test))
print(np.mean(scores))
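For reference, the completed loop runs end to end with its imports; the make_classification dataset here is a synthetic stand-in for the course's X and y:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold

# placeholder data standing in for the course's feature matrix and target array
X, y = make_classification(n_samples=100, n_features=5, random_state=0)

scores = []
kf = KFold(n_splits=5, shuffle=True, random_state=0)
for train_index, test_index in kf.split(X):
    X_train, X_test = X[train_index], X[test_index]
    y_train, y_test = y[train_index], y[test_index]
    model = LogisticRegression()
    model.fit(X_train, y_train)
    scores.append(model.score(X_test, y_test))
print(np.mean(scores))  # average accuracy over the 5 folds
```

Averaging the 5 held-out scores gives a more stable accuracy estimate than a single train/test split.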
21st Nov 2020, 2:29 AM
Jason Chew
0
Complete the code to create a fourth feature matrix that has just the Pclass and Sex features and uses the score_model function to print the scores. Assume we've defined y to be the target values and kf to be the KFold object.
X4 = df[['Pclass', 'male']].values
score_model(X4, y, kf)
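score_model is a helper defined earlier in the course; a plausible sketch of it, with a tiny made-up DataFrame standing in for the Titanic df, would be:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold

# tiny made-up stand-in for the course's Titanic DataFrame
df = pd.DataFrame({
    'Pclass': [1, 3, 2, 3, 1, 2, 3, 1],
    'male':   [0, 1, 1, 0, 1, 0, 1, 0],
})
y = np.array([1, 0, 0, 1, 0, 1, 0, 1])
kf = KFold(n_splits=4, shuffle=True, random_state=0)

def score_model(X, y, kf):
    # guessed version of the course helper: print mean accuracy over the folds
    scores = []
    for train_index, test_index in kf.split(X):
        model = LogisticRegression()
        model.fit(X[train_index], y[train_index])
        scores.append(model.score(X[test_index], y[test_index]))
    print(np.mean(scores))

X4 = df[['Pclass', 'male']].values
score_model(X4, y, kf)
```

The point of the exercise is that .values on a two-column selection yields a plain NumPy feature matrix that plugs straight into the same cross-validation helper as the earlier feature sets.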
21st Nov 2020, 2:24 AM
Jason Chew
0
Select all that apply for the version of Decision Trees we are using.
- A feature can only be used once
- Every leaf node has a prediction for the target value
- Each internal node has exactly two children
- Every feature must be used
- Every path to a leaf node will be the same length
Answer:
Every leaf node has a prediction for the target value
Each internal node has exactly two children
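The "exactly two children" property can be verified directly on a fitted scikit-learn tree; the toy dataset below is made up for illustration:

```python
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=50, n_features=4, random_state=0)
tree = DecisionTreeClassifier(random_state=0).fit(X, y)

t = tree.tree_
for node in range(t.node_count):
    left, right = t.children_left[node], t.children_right[node]
    # every node is either a leaf (both children are -1)
    # or an internal node with exactly two children
    assert (left == -1) == (right == -1)
print("all nodes are leaves or have exactly two children")
```

Leaves carry the prediction (the class counts in t.value), while internal nodes only route datapoints left or right, which is why only those two statements apply.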
21st Nov 2020, 2:36 AM
Jason Chew
0
scores = []
kf = KFold(n_splits=5, shuffle=True)
for train_index, test_index in kf.split(X):
    X_train, X_test = X[train_index], X[test_index]
    y_train, y_test = y[train_index], y[test_index]
    model = LogisticRegression()
    model.fit(X_train, y_train)
    scores.append(model.score(X_test, y_test))
print(np.mean(scores))
30th May 2021, 8:43 AM
Amin Nouri
0
scores = []
kf = KFold(n_splits=5, shuffle=True)
for train_index, test_index in kf.split(X):
    X_train, X_test = X[train_index], X[test_index]
    y_train, y_test = y[train_index], y[test_index]
    model = LogisticRegression()
    model.fit(X_train, y_train)
    scores.append(model.score(X_test, y_test))
print(np.mean(scores))
8th Dec 2021, 3:19 AM
Nabila Muftia Ma'ruf Kartono
0
many thankssss
10th May 2022, 5:19 PM
Mihai Justin Chirasnel
- 1
Letās say we have a feature matrix X. Drag and drop in order to correctly define X_train and X_test for the second fold. kf = KFold(n_splits=5, shuffle=True) splits = list(kf.split(X)) a, b = splits[] X_train = X[] X_test = X[] a b 1 0 2 ab Unlock
15th Jul 2020, 6:30 AM
Godavarthi Sai Anirudh