| 200 | 200 | " </tr>\n", |
| 201 | 201 | "</tbody>\n", |
| 202 | 202 | "</table>" |
| 203 | ] | |
| 204 | }, | |
| 205 | { | |
| 206 | "cell_type": "markdown", | |
| 207 | "metadata": {}, | |
| 208 | "source": [ | |
| 209 | "### Extended Content\n", | |
| 210 | "\n", | |
| 211 | "**1. Building a regression model with the sklearn toolkit**" | |
| 212 | ] | |
| 213 | }, | |
| 214 | { | |
| 215 | "cell_type": "markdown", | |
| 216 | "metadata": {}, | |
| 217 | "source": [ | |
| 218 | "We can also use the sklearn toolkit to solve the problem above." | |
| 219 | ] | |
| 220 | }, | |
| 221 | { | |
| 222 | "cell_type": "code", | |
| 223 | "execution_count": 1, | |
| 224 | "metadata": {}, | |
| 225 | "outputs": [ | |
| 226 | { | |
| 227 | "name": "stdout", | |
| 228 | "output_type": "stream", | |
| 229 | "text": [ | |
| 230 | "[[1.53438095]]\n", | |
| 231 | "[-2698.87714286]\n" | |
| 232 | ] | |
| 233 | } | |
| 234 | ], | |
| 235 | "source": [ | |
| 236 | "# Import the packages\n", | |
| 237 | "import numpy as np\n", | |
| 238 | "from sklearn.linear_model import LinearRegression\n", | |
| 239 | "\n", | |
| 240 | "# Define the data\n", | |
| 241 | "x = np.array([1970, 1975, 1980, 1985, 1990, 1995, 2000, 2005]).reshape(-1,1)\n", | |
| 242 | "y = np.array([325.68, 331.15, 338.69, 345.90, 354.19, 360.88, 369.48, 379.67]).reshape(-1,1)\n", | |
| 243 | "\n", | |
| 244 | "# Build the model\n", | |
| 245 | "reg = LinearRegression()\n", | |
| 246 | "\n", | |
| 247 | "# Fit the model to the data\n", | |
| 248 | "reg.fit(x, y)\n", | |
| 249 | "\n", | |
| 250 | "# Print the model parameters\n", | |
| 251 | "print(reg.coef_)\n", | |
| 252 | "print(reg.intercept_)" | |
| 253 | ] | |
| 254 | }, | |
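The fitted model can then be used for prediction. A minimal sketch (not part of the original notebook) that reuses the same data and extrapolates one step beyond the training range, assuming the linear trend continues:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

x = np.array([1970, 1975, 1980, 1985, 1990, 1995, 2000, 2005]).reshape(-1, 1)
y = np.array([325.68, 331.15, 338.69, 345.90, 354.19, 360.88, 369.48, 379.67]).reshape(-1, 1)

reg = LinearRegression().fit(x, y)

# The fitted line is y = 1.53438095 * x - 2698.87714286,
# so x = 2010 gives roughly 385.23
print(reg.predict(np.array([[2010]])))
```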
| 255 | { | |
| 256 | "cell_type": "markdown", | |
| 257 | "metadata": {}, | |
| 258 | "source": [ | |
| 259 | "**2. Gradient Descent**\n", | |
| 260 | "\n", | |
| 261 | "In the example above, different parameters a and b produce different residual values. More generally, this measure of error is called the cost function.\n", | |
| 262 | "\n", | |
| 263 | "Our goal is to choose parameters a and b that minimize the value of this cost function.\n", | |
| 264 | "\n", | |
| 265 | "Gradient descent is an algorithm for finding the minimum of a function; we can use it to minimize the cost function $J(\theta_{0}, \theta_{1})$. \n", | |
| 266 | "\n", | |
| 267 | "The idea behind gradient descent is this: we start from a random combination of parameters $(\theta_{0},\theta_{1},\ldots,\theta_{n})$, compute the cost function, and then look for the next parameter combination that lowers the cost the most. We keep doing this until we reach a local minimum. Because we have not tried every parameter combination, we cannot be sure that this local minimum is the global minimum; different initial parameters may lead to different local minima. \n", | |
| 268 | " \n", | |
| 269 | " <img src='https://i.loli.net/2018/11/30/5c00c262c5885.png' width=500 >" | |
| 270 | ] | |
| 271 | }, | |
| 272 | { | |
| 273 | "cell_type": "markdown", | |
| 274 | "metadata": {}, | |
| 275 | "source": [ | |
| 276 | "The gradient descent update rule is:\n", | |
| 277 | "\n", | |
| 278 | "<img src='https://i.loli.net/2018/11/30/5c00c5fe7ce53.png' width=350 >\n", | |
| 279 | " \n", | |
| 280 | "where J is the cost function, $\theta_{0},\theta_{1}$ are the parameters to be learned, and α is the learning rate, which determines how large a step we take in the direction that decreases the cost function the most. " | |
| 281 | ] | |
| 282 | }, | |
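The update rule can also be sketched directly in NumPy for the linear model of this lesson. This is an illustrative implementation, not the notebook's own code; the inputs are normalized first so that a single fixed learning rate converges:

```python
import numpy as np

x = np.array([1970, 1975, 1980, 1985, 1990, 1995, 2000, 2005], dtype=float)
y = np.array([325.68, 331.15, 338.69, 345.90, 354.19, 360.88, 369.48, 379.67])

# Normalize the inputs: gradient descent on raw years would need a tiny learning rate
xn = (x - x.mean()) / x.std()

theta0, theta1 = 0.0, 0.0  # start from an arbitrary parameter combination
alpha = 0.1                # learning rate

for _ in range(1000):
    residual = theta0 + theta1 * xn - y
    # Partial derivatives of J = (1/2m) * sum(residual^2),
    # applied as a simultaneous update of both parameters
    theta0 -= alpha * residual.mean()
    theta1 -= alpha * (residual * xn).mean()

# theta1 is the slope per standard deviation of x;
# dividing by x.std() recovers sklearn's slope of about 1.5344 per year
print(theta0, theta1 / x.std())
```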
| 283 | { | |
| 284 | "cell_type": "markdown", | |
| 285 | "metadata": {}, | |
| 286 | "source": [ | |
| 287 | "For linear regression, the cost function curve is U-shaped.\n", | |
| 288 | "\n", | |
| 289 | "<img src=\"http://imgbed.momodel.cn//20200115000050.png\" width=300>\n", | |
| 290 | "\n", | |
| 291 | "And because the cost curve is U-shaped (convex), gradient descent, with a suitable learning rate, is guaranteed to find its global minimum." | |
| 292 | ] | |
| 293 | }, | |
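This U shape is easy to verify numerically. The sketch below (illustrative, reusing the same normalized data as before) fixes the intercept at the mean of y and sweeps the slope: the cost falls to a single minimum and rises again:

```python
import numpy as np

x = np.array([1970, 1975, 1980, 1985, 1990, 1995, 2000, 2005], dtype=float)
y = np.array([325.68, 331.15, 338.69, 345.90, 354.19, 360.88, 369.48, 379.67])
xn = (x - x.mean()) / x.std()  # normalized inputs

def cost(theta0, theta1):
    # J(theta0, theta1) = half the mean squared residual
    return ((theta0 + theta1 * xn - y) ** 2).mean() / 2

# Sweeping the slope around the optimum (~17.6 in normalized units)
# traces out the U: the cost decreases, bottoms out, then increases
for theta1 in (0.0, 10.0, 17.6, 25.0, 35.0):
    print(theta1, cost(y.mean(), theta1))
```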
| 294 | { | |
| 295 | "cell_type": "markdown", | |
| 296 | "metadata": {}, | |
| 297 | "source": [ | |
| 298 | "Gradient descent is in fact widely applicable: it can be used not only for regression problems but also for classification problems. The figure below shows the model learning process." | |
| 299 | ] | |
| 300 | }, | |
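As a sketch of how the same machinery carries over to classification, here is a tiny logistic-regression model trained with exactly the same kind of gradient updates. The data and all settings are made up for illustration (linearly separable points above and below the line x2 = x1):

```python
import numpy as np

# Toy data: label 1 when the second coordinate exceeds the first
X = np.array([[0.0, 1.0], [1.0, 2.0], [2.0, 3.0],
              [1.0, 0.0], [2.0, 1.0], [3.0, 2.0]])
y = np.array([1, 1, 1, 0, 0, 0])

w = np.zeros(2)  # weights
b = 0.0          # bias
alpha = 0.5      # learning rate

for _ in range(2000):
    p = 1 / (1 + np.exp(-(X @ w + b)))  # sigmoid probabilities
    # Gradient of the average log loss, same descent step as before
    w -= alpha * (X.T @ (p - y)) / len(y)
    b -= alpha * (p - y).mean()

pred = (1 / (1 + np.exp(-(X @ w + b))) > 0.5).astype(int)
print(pred)  # classifies all six toy points correctly
```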
| 301 | { | |
| 302 | "cell_type": "markdown", | |
| 303 | "metadata": {}, | |
| 304 | "source": [ | |
| 305 | "<img src=\"http://imgbed.momodel.cn/panel_49_animation.gif\"/>" | |
| 203 | 306 | ] |
| 204 | 307 | }, |
| 205 | 308 | { |
| 231 | 231 | "source": [ |
| 232 | 232 | "### Extended Content\n", |
| 233 | 233 | "\n", |
| 234 | "**Gradient Descent**\n", | |
| 235 | "\n", | |
| 236 | "\n", | |
| 237 | "\n", | |
| 238 | "Gradient descent is an algorithm for finding the minimum of a function.\n", | |
| 239 | "\n", | |
| 240 | "We give the residual above a new name: the cost function.\n", | |
| 234 | "**1. Building a regression model with the sklearn toolkit**" | |
| 235 | ] | |
| 236 | }, | |
| 237 | { | |
| 238 | "cell_type": "markdown", | |
| 239 | "metadata": {}, | |
| 240 | "source": [ | |
| 241 | "We can also use the sklearn toolkit to solve the problem above." | |
| 242 | ] | |
| 243 | }, | |
| 244 | { | |
| 245 | "cell_type": "code", | |
| 246 | "execution_count": 1, | |
| 247 | "metadata": {}, | |
| 248 | "outputs": [ | |
| 249 | { | |
| 250 | "name": "stdout", | |
| 251 | "output_type": "stream", | |
| 252 | "text": [ | |
| 253 | "[[1.53438095]]\n", | |
| 254 | "[-2698.87714286]\n" | |
| 255 | ] | |
| 256 | } | |
| 257 | ], | |
| 258 | "source": [ | |
| 259 | "# Import the packages\n", | |
| 260 | "import numpy as np\n", | |
| 261 | "from sklearn.linear_model import LinearRegression\n", | |
| 262 | "\n", | |
| 263 | "# Define the data\n", | |
| 264 | "x = np.array([1970, 1975, 1980, 1985, 1990, 1995, 2000, 2005]).reshape(-1,1)\n", | |
| 265 | "y = np.array([325.68, 331.15, 338.69, 345.90, 354.19, 360.88, 369.48, 379.67]).reshape(-1,1)\n", | |
| 266 | "\n", | |
| 267 | "# Build the model\n", | |
| 268 | "reg = LinearRegression()\n", | |
| 269 | "\n", | |
| 270 | "# Fit the model to the data\n", | |
| 271 | "reg.fit(x, y)\n", | |
| 272 | "\n", | |
| 273 | "# Print the model parameters\n", | |
| 274 | "print(reg.coef_)\n", | |
| 275 | "print(reg.intercept_)" | |
| 276 | ] | |
| 277 | }, | |
| 278 | { | |
| 279 | "cell_type": "markdown", | |
| 280 | "metadata": {}, | |
| 281 | "source": [ | |
| 282 | "**2. Gradient Descent**\n", | |
| 283 | "\n", | |
| 284 | "In the example above, different parameters a and b produce different residual values. More generally, this measure of error is called the cost function.\n", | |
| 285 | "\n", | |
| 286 | "Our goal is to choose parameters a and b that minimize the value of this cost function.\n", | |
| 287 | "\n", | |
| 288 | "Gradient descent is an algorithm for finding the minimum of a function; we can use it to minimize the cost function $J(\theta_{0}, \theta_{1})$. \n", | |
| 241 | 289 | "\n", |
| 242 | 290 | "The idea behind gradient descent is this: we start from a random combination of parameters $(\theta_{0},\theta_{1},\ldots,\theta_{n})$, compute the cost function, and then look for the next parameter combination that lowers the cost the most. We keep doing this until we reach a local minimum. Because we have not tried every parameter combination, we cannot be sure that this local minimum is the global minimum; different initial parameters may lead to different local minima. \n", |
| 243 | 291 | " \n", |
| 244 | " <img src='https://i.loli.net/2018/11/30/5c00c262c5885.png' width=350 >\n", | |
| 245 | "\n", | |
| 246 | "Imagine standing at a point on a hill, on that red mountain in the park you pictured. In gradient descent, we turn a full 360 degrees, look around, and ask ourselves: in which direction should I take small steps to get downhill as quickly as possible? You pick the best downhill direction you can see and take a step. From this new point you look around again, decide which direction will take you downhill fastest, take another small step, and so on, until you approach a local minimum.\n" | |
| 247 | ] | |
| 248 | }, | |
| 249 | { | |
| 250 | "cell_type": "markdown", | |
| 251 | "metadata": {}, | |
| 252 | "source": [ | |
| 292 | " <img src='https://i.loli.net/2018/11/30/5c00c262c5885.png' width=500 >" | |
| 293 | ] | |
| 294 | }, | |
| 295 | { | |
| 296 | "cell_type": "markdown", | |
| 297 | "metadata": {}, | |
| 298 | "source": [ | |
| 299 | "The gradient descent update rule is:\n", | |
| 300 | "\n", | |
| 253 | 301 | "<img src='https://i.loli.net/2018/11/30/5c00c5fe7ce53.png' width=350 >\n", |
| 254 | 302 | " \n", |
| 255 | "where α is the learning rate, which determines how large a step we take in the direction that decreases the cost function the most. " | |
| 303 | "where J is the cost function, $\theta_{0},\theta_{1}$ are the parameters to be learned, and α is the learning rate, which determines how large a step we take in the direction that decreases the cost function the most. " | |
| 304 | ] | |
| 305 | }, | |
| 306 | { | |
| 307 | "cell_type": "markdown", | |
| 308 | "metadata": {}, | |
| 309 | "source": [ | |
| 310 | "For linear regression, the cost function curve is U-shaped.\n", | |
| 311 | "\n", | |
| 312 | "<img src=\"http://imgbed.momodel.cn//20200115000050.png\" width=300>\n", | |
| 313 | "\n", | |
| 314 | "And because the cost curve is U-shaped (convex), gradient descent, with a suitable learning rate, is guaranteed to find its global minimum." | |
| 315 | ] | |
| 316 | }, | |
| 317 | { | |
| 318 | "cell_type": "markdown", | |
| 319 | "metadata": {}, | |
| 320 | "source": [ | |
| 321 | "Gradient descent is in fact widely applicable: it can be used not only for regression problems but also for classification problems. The figure below shows the model learning process." | |
| 322 | ] | |
| 323 | }, | |
| 324 | { | |
| 325 | "cell_type": "markdown", | |
| 326 | "metadata": {}, | |
| 327 | "source": [ | |
| 328 | "<img src=\"http://imgbed.momodel.cn/panel_49_animation.gif\"/>" | |
| 256 | 329 | ] |
| 257 | 330 | }, |
| 258 | 331 | { |
| 104 | 104 | "# Posterior probability that an email is legitimate given that it contains “red envelope”\n", |
| 105 | 105 | "P_normal_hongbao = P_normal * P_hongbao_normal / P_hongbao\n", |
| 106 | 106 | "print(\"Posterior probability that an email containing “red envelope” is legitimate: \" + str(P_normal_hongbao))" |
| 107 | ] | |
| 108 | }, | |
| 109 | { | |
| 110 | "cell_type": "markdown", | |
| 111 | "metadata": {}, | |
| 112 | "source": [ | |
| 113 | "### Extended Content\n", | |
| 114 | "\n", | |
| 115 | "**Does a positive test result really mean you are sick?**\n", | |
| 116 | "\n", | |
| 117 | "Student A was feeling unwell and went to the hospital for a blood test to check for disease X. To his shock, the result came back positive, and he hurried to look it up online. He found material saying that tests always have some error: this test has “a one percent false positive rate and a one percent false negative rate.” That is, among people who really have disease X, 1% test falsely negative and 99% test truly positive; among people who do not have the disease, 1% test falsely positive and 99% test truly negative. So, he reasoned, since the error rate is this low, the probability that he actually has the disease must be very high.\n", | |
| 118 | "\n", | |
| 119 | "Yet the doctor told him that his probability of being infected is only about 0.09. How can that be?\n", | |
| 120 | "\n", | |
| 121 | "The doctor said: “Don't be afraid. The 99% is the accuracy of the test, not the probability that you have the disease. You are forgetting one thing: this disease is very rare; only one person in 1000 has it.”\n", | |
| 122 | "\n", | |
| 123 | "The doctor's reasoning goes like this: because the test's false positive rate is 1%, about 10 out of 1000 people will be reported as “false positives,” while, given the prevalence of disease X in the population (1/1000 = 0.1%), only 1 person in 1000 is a true positive. So only about 1 in 11 people who test positive is a true positive (actually sick), and therefore the student's probability of being infected is about 1/11, i.e., 0.09 (9%)." | |
| 124 | ] | |
| 125 | }, | |
| 126 | { | |
| 127 | "cell_type": "markdown", | |
| 128 | "metadata": {}, | |
| 129 | "source": [ | |
| 130 | "$A$ : an ordinary person has disease X\n", | |
| 131 | "\n", | |
| 132 | "$B$ : the test result is positive\n", | |
| 133 | "\n", | |
| 134 | "$P(A)$: the probability that an ordinary person has disease X, 1/1000\n", | |
| 135 | "\n", | |
| 136 | "$P(B)$: the overall probability of a positive test result \n", | |
| 137 | "\n", | |
| 138 | "$P(A|B)$: the probability that a person has disease X given a positive test result\n", | |
| 139 | "\n", | |
| 140 | "$P(B|A)$: the probability that a person with disease X tests positive, 99%\n", | |
| 141 | "\n", | |
| 142 | "According to **Bayes' theorem**:\n", | |
| 143 | "\n", | |
| 144 | "$$\n", | |
| 145 | "\\begin{align}\n", | |
| 146 | "&P(A|B)=\\frac{P(B|A){P(A)}}{P(B)}\\\\\n", | |
| 147 | "=&\\frac{99\\%\\times(1/1000)}{99\\%\\times(1/1000) + 1\\%\\times(999/1000)}\\\\\n", | |
| 148 | "=&\\frac{99}{1098}\\\\\n", | |
| 149 | "\\approx& 9\\%\n", | |
| 150 | "\\end{align}\n", | |
| 151 | "$$\n" | |
| 107 | 152 | ] |
| 108 | 153 | }, |
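The doctor's arithmetic can be reproduced in a few lines; all numbers come from the story above:

```python
# Prior and test characteristics from the story
p_disease = 1 / 1000          # prevalence: 1 person in 1000 has disease X
p_pos_given_disease = 0.99    # true positive rate
p_pos_given_healthy = 0.01    # false positive rate

# P(B): total probability of a positive result (law of total probability)
p_pos = p_pos_given_disease * p_disease + p_pos_given_healthy * (1 - p_disease)

# Bayes' theorem: P(A|B) = P(B|A) * P(A) / P(B)
p_disease_given_pos = p_pos_given_disease * p_disease / p_pos
print(p_disease_given_pos)  # 99/1098, about 0.09
```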
| 109 | 154 | { |
| 269 | 269 | "+ Each hidden layer contains a number of neurons.\n", |
| 270 | 270 | "\n", |
| 271 | 271 | "Neural network workflow:\n", |
| 272 | "<center><video src=\"https://files.momodel.cn/media1.mp4\" controls=\"controls\" width=800px></center>\n", | |
| 273 | "<center><video src=\"https://files.momodel.cn/media2.mp4\" controls=\"controls\" width=800px></center>\n", | |
| 274 | "<center><video src=\"https://files.momodel.cn/media3.mp4\" controls=\"controls\" width=800px></center>\n", | |
| 275 | "<center><video src=\"https://files.momodel.cn/media4.mp4\" controls=\"controls\" width=800px></center>\n", | |
| 276 | "<center><video src=\"https://files.momodel.cn/media5.mp4\" controls=\"controls\" width=800px></center>\n", | |
| 277 | "<center><video src=\"https://files.momodel.cn/media6.mp4\" controls=\"controls\" width=800px></center>\n" | |
| 272 | "<center><video src=\"./nn_media1.mp4\" controls=\"controls\" width=800px></center>\n", | |
| 273 | "<center><video src=\"./nn_media2.mp4\" controls=\"controls\" width=800px></center>\n", | |
| 274 | "<center><video src=\"./nn_media3.mp4\" controls=\"controls\" width=800px></center>\n", | |
| 275 | "<center><video src=\"./nn_media4.mp4\" controls=\"controls\" width=800px></center>\n", | |
| 276 | "<center><video src=\"./nn_media5.mp4\" controls=\"controls\" width=800px></center>\n", | |
| 277 | "<center><video src=\"./nn_media6.mp4\" controls=\"controls\" width=800px></center>\n" | |
| 278 | 278 | ] |
| 279 | 279 | }, |
| 280 | 280 | { |
| 545 | 545 | "source": [ |
| 546 | 546 | "**Answer 1**: (Write your answer here.)" |
| 547 | 547 | ] |
| 548 | }, | |
| 549 | { | |
| 550 | "cell_type": "markdown", | |
| 551 | "metadata": {}, | |
| 552 | "source": [ | |
| 553 | "## Further Reading\n", | |
| 554 | "1. [Changing a person's facial features with the StarGAN algorithm](https://momodel.cn/explore/5c0cc4591afd945c5177fb51?type=app)\n", | |
| 555 | "2. [Turning your sketch into an image with pix2pix](https://momodel.cn/explore/5c0cb5df1afd945819064752?type=app)\n", | |
| 556 | "3. [Automatic image captioning](https://momodel.cn/explore/5ba33f578fe30b412042ac08?&type=app&tab=1)" | |
| 557 | ] | |
| 558 | }, | |
| 559 | { | |
| 560 | "cell_type": "code", | |
| 561 | "execution_count": null, | |
| 562 | "metadata": {}, | |
| 563 | "outputs": [], | |
| 564 | "source": [] | |
| 548 | 565 | } |
| 549 | 566 | ], |
| 550 | 567 | "metadata": { |
| 564 | 564 | "print('Original model\\nClass probabilities: %s, predicted: %s, actual: %s\\n' % (predict_results,np.argmax(predict_results),np.argmax(y_test[0])))\n", |
| 565 | 565 | "print('Model with only one hidden layer\\nClass probabilities: %s, predicted: %s, actual: %s\\n' % (predict_results1,np.argmax(predict_results1),np.argmax(y_test[0])))\n", |
| 566 | 566 | "print('Model with a modified number of hidden neurons\\nClass probabilities: %s, predicted: %s, actual: %s' % (predict_results2,np.argmax(predict_results2),np.argmax(y_test[0])))" |
| 567 | ] | |
| 568 | }, | |
| 569 | { | |
| 570 | "cell_type": "markdown", | |
| 571 | "metadata": {}, | |
| 572 | "source": [ | |
| 573 | "## Further Reading\n", | |
| 574 | "1. [Changing a person's facial features with the StarGAN algorithm](https://momodel.cn/explore/5c0cc4591afd945c5177fb51?type=app)\n", | |
| 575 | "2. [Turning your sketch into an image with pix2pix](https://momodel.cn/explore/5c0cb5df1afd945819064752?type=app)\n", | |
| 576 | "3. [Automatic image captioning](https://momodel.cn/explore/5ba33f578fe30b412042ac08?&type=app&tab=1)" | |
| 567 | 577 | ] |
| 568 | 578 | }, |
| 569 | 579 | { |