== Network design ==
Using artificial neural networks requires an understanding of their characteristics.
* Choice of model: This depends on the data representation and the application. Model parameters include the number, type, and connectedness of network layers, as well as the size of each layer and the connection type (full, pooling, etc.). Overly complex models learn slowly.
* [[Machine learning|Learning algorithm]]: Numerous trade-offs exist between learning algorithms. Almost any algorithm will work well with the correct hyperparameters<ref>{{cite journal |last1=Probst |first1=Philipp |last2=Boulesteix |first2=Anne-Laure |last3=Bischl |first3=Bernd |title=Tunability: Importance of Hyperparameters of Machine Learning Algorithms |journal=J. Mach. Learn. Res. |date=26 February 2018 |volume=20 |pages=53:1–53:32 |s2cid=88515435 }}</ref> for training on a particular data set. However, selecting and tuning an algorithm for training on unseen data requires significant experimentation.
* [[Robustness]]: If the model, cost function and learning algorithm are selected appropriately, the resulting ANN can become robust.

[[Neural architecture search]] (NAS) uses machine learning to automate ANN design. Various approaches to NAS have designed networks that compare well with hand-designed systems. The basic search algorithm is to propose a candidate model, evaluate it against a dataset, and use the results as feedback to teach the NAS network.<ref>{{cite arXiv|last1=Zoph|first1=Barret|last2=Le|first2=Quoc V.|date=4 November 2016|title=Neural Architecture Search with Reinforcement Learning|eprint=1611.01578|class=cs.LG}}</ref> A simplified random-search sketch of this loop appears below. Available systems include [[Automated machine learning|AutoML]] and AutoKeras.<ref>{{cite journal |author1=Haifeng Jin |author2=Qingquan Song |author3=Xia Hu |title=Auto-keras: An efficient neural architecture search system |journal=Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining |publisher=ACM |date=2019 |arxiv=1806.10282 |url=https://autokeras.com/ |via=autokeras.com |access-date=21 August 2019 |archive-date=21 August 2019 |archive-url=https://web.archive.org/web/20190821163310/https://autokeras.com/ |url-status=live }}</ref>

The [[Scikit-learn|scikit-learn library]] provides functions that help with building a deep network from scratch. A deep network can then be implemented with [[TensorFlow]] or [[Keras]], as in the Keras sketch below.
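As an illustration of the propose-evaluate-feedback loop described above, the following is a minimal sketch that uses plain random search over scikit-learn <code>MLPClassifier</code> architectures. This is a much simpler baseline than the reinforcement-learning controller of the cited NAS work; the dataset, the search space, and the budget of ten trials are all hypothetical choices made only for illustration.<syntaxhighlight lang="python3">
import random

from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

# Hypothetical toy dataset standing in for the target task.
X, y = make_classification(n_samples=300, n_features=10, random_state=0)

best_score, best_arch = 0.0, None
for trial in range(10):  # hypothetical search budget
    # Propose a candidate architecture: 1-3 hidden layers of 8-64 units.
    arch = tuple(random.choice([8, 16, 32, 64])
                 for _ in range(random.randint(1, 3)))
    candidate = MLPClassifier(hidden_layer_sizes=arch, max_iter=500,
                              random_state=0)
    # Evaluate the candidate against the dataset.
    score = cross_val_score(candidate, X, y, cv=3).mean()
    # Use the result as feedback: keep the best architecture seen so far.
    if score > best_score:
        best_score, best_arch = score, arch

print("best architecture:", best_arch, "cv accuracy:", round(best_score, 3))
</syntaxhighlight>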
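The following is a minimal sketch of such a Keras implementation. The toy data, layer widths, learning rate, and training settings are illustrative assumptions, not recommendations.<syntaxhighlight lang="python3">
import numpy as np
import tensorflow as tf

# Hypothetical toy data: 100 samples with 8 features, binary labels.
X = np.random.randn(100, 8).astype("float32")
y = (X.sum(axis=1) > 0).astype("float32")

# Layer widths and the learning rate are hyperparameters fixed by the
# designer; they are not learned during training.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(8,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.01),
    loss="binary_crossentropy",
    metrics=["accuracy"],
)
model.fit(X, y, epochs=20, batch_size=16, verbose=0)
</syntaxhighlight>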
Hyperparameters must also be defined as part of the design (they are not learned), governing matters such as the number of neurons in each layer, the learning rate, step, stride, depth, receptive field and padding (for CNNs), etc.<ref name="abs1502.02127">{{cite arXiv|eprint=1502.02127|last1=Claesen|first1=Marc|last2=De Moor|first2=Bart |title=Hyperparameter Search in Machine Learning |date=2015|class=cs.LG }} {{bibcode|2015arXiv150202127C}}</ref>

{{citation needed span|date=July 2023|The [[Python (programming language)|Python]] code snippet provides an overview of the training function, which uses the training dataset, number of hidden layer units, learning rate, and number of iterations as parameters:<syntaxhighlight lang="python3" line="1">
import numpy as np


def sigmoid(z):
    return 1 / (1 + np.exp(-z))


def sigmoid_derivative(z):
    s = sigmoid(z)
    return s * (1 - s)


def train(X, y, n_hidden, learning_rate, n_iter):
    m, n_input = X.shape

    # 1. randomly initialize weights and biases
    w1 = np.random.randn(n_input, n_hidden)
    b1 = np.zeros((1, n_hidden))
    w2 = np.random.randn(n_hidden, 1)
    b2 = np.zeros((1, 1))

    # 2. in each iteration, feed all layers with the latest weights and biases
    for i in range(n_iter + 1):
        # forward pass: sigmoid hidden layer, linear output layer
        z2 = np.dot(X, w1) + b1
        a2 = sigmoid(z2)
        z3 = np.dot(a2, w2) + b2
        a3 = z3

        # backward pass: gradients of the squared-error loss
        dz3 = a3 - y
        dw2 = np.dot(a2.T, dz3)
        db2 = np.sum(dz3, axis=0, keepdims=True)
        dz2 = np.dot(dz3, w2.T) * sigmoid_derivative(z2)
        dw1 = np.dot(X.T, dz2)
        db1 = np.sum(dz2, axis=0)

        # 3. update weights and biases with gradients
        w1 -= learning_rate * dw1 / m
        w2 -= learning_rate * dw2 / m
        b1 -= learning_rate * db1 / m
        b2 -= learning_rate * db2 / m
        if i % 1000 == 0:
            print("Epoch", i, "loss: ", np.mean(np.square(dz3)))

    model = {"w1": w1, "b1": b1, "w2": w2, "b2": b2}
    return model
</syntaxhighlight>}}
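As a usage illustration, the hypothetical snippet below calls the training function above on synthetic data with hand-chosen hyperparameters, then runs a forward pass with the returned parameters. It assumes the imports and <code>sigmoid</code> helper from the previous block are in scope.<syntaxhighlight lang="python3">
import numpy as np

# Hypothetical toy regression data: y depends nonlinearly on X.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (np.sin(X[:, 0]) + X[:, 1] ** 2).reshape(-1, 1)

# The hidden-layer width and learning rate are design decisions, not
# learned quantities; these values are illustrative only.
model = train(X, y, n_hidden=10, learning_rate=0.1, n_iter=5000)

# Forward pass with the learned parameters to obtain predictions.
z2 = np.dot(X, model["w1"]) + model["b1"]
a2 = sigmoid(z2)
pred = np.dot(a2, model["w2"]) + model["b2"]
print("final MSE:", np.mean((pred - y) ** 2))
</syntaxhighlight>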