diff --git a/02.01 基于搜索的问题求解(学生版).ipynb b/02.01 基于搜索的问题求解(学生版).ipynb index 6ec8a47..e1f9951 100644 --- a/02.01 基于搜索的问题求解(学生版).ipynb +++ b/02.01 基于搜索的问题求解(学生版).ipynb @@ -378,6 +378,17 @@ "metadata": {}, "source": [ "**答案 2**:(在此处填写你的答案。)\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## 扩展阅读\n", + "1. [Introduction to the A* Algorithm](https://www.redblobgames.com/pathfinding/a-star/introduction.html)\n", + "2. [游戏开发中的人工智能:A* 路径寻找算法](https://blog.csdn.net/Jurbo/article/details/75532885)\n", + "3. [A* Search Algorithm](https://www.101computing.net/a-star-search-algorithm/)\n", + "4. [A star Pathfinding A星寻路算法](https://www.bilibili.com/video/av32847834/)" ] } ], diff --git a/02.01 基于搜索的问题求解.ipynb b/02.01 基于搜索的问题求解.ipynb index ae027e1..f68ebf1 100644 --- a/02.01 基于搜索的问题求解.ipynb +++ b/02.01 基于搜索的问题求解.ipynb @@ -78,7 +78,7 @@ " \"E\": (7, 4), \"F\": (6,6),\"G\": (11,5)}\n", "\n", "# 绘制无向图\n", - "g = SearchGraph(node_list, weighted_edges_list, 'A', 'F', max_depth=3, nodes_pos = nodes_pos)\n", + "g = SearchGraph(node_list, weighted_edges_list, 'A', 'G', max_depth=3, nodes_pos = nodes_pos)\n", "g.show_graph()" ] }, @@ -242,6 +242,45 @@ "metadata": {}, "source": [ "需要强调的是,对于一个搜索问题,只要存在答案(即从初始节点到终止节点存在满足条件的一条路径),那么排除了回路的深度优先搜索和广度优先搜索均能找到一个答案,但是这个找到的答案不一定是最优的,例如距离最短。" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### 扩展内容\n", + "**深度优先搜索 dfs** 基础代码解读\n", + "\n", + "```python\n", + "def iter_dfs(G, start, target):\n", + " '''\n", + " 深度优先搜索\n", + " :param G: 字典,存储每个点的相邻点\n", + " :param start: 初始点\n", + " :param target: 目标点\n", + " :return:\n", + " '''\n", + "\n", + " # 定义已访问的点的集合\n", + " S = set()\n", + " # 定义一个待访问点的列表\n", + " Q = []\n", + " # 把初始点放进列表中\n", + " Q.append(start)\n", + " while Q:\n", + " # 只要待访问的列表不为空,那么从列表中拿取最后一个元素,也就是一个点,记作 u\n", + " u = Q.pop()\n", + " # 如果当前点是目标点,则结束查找\n", + " if u == target:\n", + " break\n", + " # 如果该点已经被访问了,则跳过此点\n", + " if u in S:\n", + " continue\n", + " # 访问此点,将点加入已访问点的集合 S 中\n", + " 
S.add(u)\n", + " # 将点 u 相邻的点放入待访问的列表中\n", + " Q.extend(G[u])\n", + "```" ] }, { @@ -409,6 +448,18 @@ "source": [ "# 查看 dfs 的搜索过程\n", "h_graph.animation_search_tree('dfs')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## 扩展阅读\n", + "1. [Introduction to the A* Algorithm](https://www.redblobgames.com/pathfinding/a-star/introduction.html)\n", + "2. [游戏开发中的人工智能:A* 路径寻找算法](https://blog.csdn.net/Jurbo/article/details/75532885)\n", + "3. [A* Search Algorithm](https://www.101computing.net/a-star-search-algorithm/)\n", + "4. [A star Pathfinding A星寻路算法](https://www.bilibili.com/video/av32847834/)\n", + " " ] } ], diff --git a/02.02 决策树(学生版).ipynb b/02.02 决策树(学生版).ipynb index 626c938..0dd0f17 100644 --- a/02.02 决策树(学生版).ipynb +++ b/02.02 决策树(学生版).ipynb @@ -922,6 +922,17 @@ "source": [ "**答案 1**:(在此处填写你的答案。)" ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## 扩展阅读\n", + "\n", + "1. [决策树与随机森林](https://www.bilibili.com/video/av26086646?from=search&seid=6716049859412037731)\n", + "2. [从决策树到随机森林:树型算法的原理与实现](https://www.jiqizhixin.com/articles/2017-07-31-3)\n", + "3. [sklearn 决策树](https://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeClassifier.html)" + ] } ], "metadata": { diff --git a/02.02 决策树.ipynb b/02.02 决策树.ipynb index 7a3176b..2d96567 100644 --- a/02.02 决策树.ipynb +++ b/02.02 决策树.ipynb @@ -794,6 +794,17 @@ "source": [ "ent = cal_essay_entropy(en_essay, split_by = ' ')\n" ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## 扩展阅读\n", + "\n", + "1. [决策树与随机森林](https://www.bilibili.com/video/av26086646?from=search&seid=6716049859412037731)\n", + "2. [从决策树到随机森林:树型算法的原理与实现](https://www.jiqizhixin.com/articles/2017-07-31-3)\n", + "3. 
[sklearn 决策树](https://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeClassifier.html)" + ] } ], "metadata": { diff --git a/02.03 回归分析(学生版).ipynb b/02.03 回归分析(学生版).ipynb index 0f0afd1..8c10f8a 100644 --- a/02.03 回归分析(学生版).ipynb +++ b/02.03 回归分析(学生版).ipynb @@ -491,6 +491,16 @@ "plt.plot(x_predict, z_predict, c='b')\n", "plt.show()" ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## 扩展阅读\n", + "\n", + "1. [线性回归白板推导](https://www.bilibili.com/video/av31989606?from=search&seid=15463936019723788543)\n", + "2. [sklearn 线性回归](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html)" + ] } ], "metadata": { diff --git a/02.03 回归分析.ipynb b/02.03 回归分析.ipynb index 78eaf07..575cc82 100644 --- a/02.03 回归分析.ipynb +++ b/02.03 回归分析.ipynb @@ -512,6 +512,16 @@ "plt.plot(x_predict, z_predict, c='b')\n", "plt.show()" ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## 扩展阅读\n", + "\n", + "1. [线性回归白板推导](https://www.bilibili.com/video/av31989606?from=search&seid=15463936019723788543)\n", + "2. [sklearn 线性回归](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html)" + ] } ], "metadata": { diff --git a/02.04 贝叶斯分析(学生版).ipynb b/02.04 贝叶斯分析(学生版).ipynb index ab31e33..7f6068c 100644 --- a/02.04 贝叶斯分析(学生版).ipynb +++ b/02.04 贝叶斯分析(学生版).ipynb @@ -482,6 +482,16 @@ "source": [ "**答案 1**:(在此处填写你的答案。)" ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## 扩展阅读\n", + "1. [一文读懂概率论学习:贝叶斯理论](https://www.jiqizhixin.com/articles/2019-11-21)\n", + "2. [朴素贝叶斯法讲解](https://www.bilibili.com/video/av57126177?from=search&seid=1588787263892359481)\n", + "3. 
[sklearn 贝叶斯方法](https://scikit-learn.org/stable/modules/naive_bayes.html)" + ] } ], "metadata": { diff --git a/02.04 贝叶斯分析.ipynb b/02.04 贝叶斯分析.ipynb index 954441f..0acba03 100644 --- a/02.04 贝叶斯分析.ipynb +++ b/02.04 贝叶斯分析.ipynb @@ -468,6 +468,16 @@ "imgs = get_imgs(test_images, test_labels, test_predict_BNB, 9, 4)\n", "plot_images(imgs)" ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## 扩展阅读\n", + "1. [一文读懂概率论学习:贝叶斯理论](https://www.jiqizhixin.com/articles/2019-11-21)\n", + "2. [朴素贝叶斯法讲解](https://www.bilibili.com/video/av57126177?from=search&seid=1588787263892359481)\n", + "3. [sklearn 贝叶斯方法](https://scikit-learn.org/stable/modules/naive_bayes.html)" + ] } ], "metadata": { diff --git a/02.05 神经网络学习(学生版).ipynb b/02.05 神经网络学习(学生版).ipynb index 70f5f2a..c29cf61 100644 --- a/02.05 神经网络学习(学生版).ipynb +++ b/02.05 神经网络学习(学生版).ipynb @@ -25,14 +25,60 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "人眼在辨识图片时,会先提取边缘特征,再识别部件,最后再得到最高层的模式。也就是说,高层的特征是低层特征的组合,从低层到高层的特征表示越来越抽象,越来越能表现语义。" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "" + "\n", + " \n", + " \n", + " \n", + " \n", + " \n", + "
\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "高层的特征是低层特征的组合,从低层到高层的特征表示越来越抽象,越来越能表现语义。" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n", + " \n", + " \n", + " \n", + " \n", + "
" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "人工智能中神经网络正是体现“逐层抽象、渐进学习”机制的学习模型。" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "人眼在辨识图片时,会先提取边缘特征,再识别部件,最后再得到最高层的模式。" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "" ] }, { @@ -48,7 +94,13 @@ "source": [ "**感知机模型**:\n", "\n", - "" + "\n", + " \n", + " \n", + " \n", + " \n", + "
\n", + "
" ] }, { @@ -215,7 +267,15 @@ "\n", "与感知机的不同,神经网络:\n", "+ 输入层和输出层之间存在若干隐藏层。\n", - "+ 每个隐藏层中包含若干神经元。\n" + "+ 每个隐藏层中包含若干神经元。\n", + "\n", + "神经网络流程:\n", + "
\n", + "
\n", + "
\n", + "
\n", + "
\n", + "
\n" ] }, { diff --git a/02.05 神经网络学习.ipynb b/02.05 神经网络学习.ipynb index d0e99bd..404a236 100644 --- a/02.05 神经网络学习.ipynb +++ b/02.05 神经网络学习.ipynb @@ -25,14 +25,60 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "人眼在辨识图片时,会先提取边缘特征,再识别部件,最后再得到最高层的模式。也就是说,高层的特征是低层特征的组合,从低层到高层的特征表示越来越抽象,越来越能表现语义。" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "" + "\n", + " \n", + " \n", + " \n", + " \n", + " \n", + "
\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n", + " \n", + " \n", + " \n", + " \n", + "
" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "高层的特征是低层特征的组合,从低层到高层的特征表示越来越抽象,越来越能表现语义。\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "人工智能中神经网络正是体现“逐层抽象、渐进学习”机制的学习模型。" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "人眼在辨识图片时,会先提取边缘特征,再识别部件,最后再得到最高层的模式。" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "" ] }, { @@ -48,7 +94,13 @@ "source": [ "**感知机模型**:\n", "\n", - "" + "\n", + " \n", + " \n", + " \n", + " \n", + "
\n", + "
" ] }, { @@ -217,7 +269,15 @@ "\n", "与感知机的不同,神经网络:\n", "+ 输入层和输出层之间存在若干隐藏层。\n", - "+ 每个隐藏层中包含若干神经元。\n" + "+ 每个隐藏层中包含若干神经元。\n", + "\n", + "神经网络流程:\n", + "
\n", + "
\n", + "
\n", + "
\n", + "
\n", + "
\n" ] }, { diff --git a/search.py b/search.py index e9a5f67..934479e 100644 --- a/search.py +++ b/search.py @@ -401,7 +401,7 @@ def generate_greedy_help_text(self,path): if path[-1] == self.target_node: return '抵达目标节点' + str(self.target_node) - elif len(path) == self.max_depth+1: + elif path not in self.search_scores: return '抵达最大搜索深度,未找到目标节点' base_text = '当前可选的子节点及其信息值为 \n'+ \