{
"cells": [
{
"cell_type": "markdown",
"id": "1bba0128",
"metadata": {},
"source": [
"# Supervised learning with PyTorch\n",
"\n",
"This is an example of how to build and optimize neural networks with PyTorch. PyTorch and Tensorflow offer a handy mechanism to provide automatic differentiation, using the chain rule in Calculus to calculate the derivative of a function very fast and with GPU support.\n",
"\n",
"Our dataset will consist of images of handwritten digits and the task shall be to classify those handwritten digits in the classes {0, 1, 2, 3, 4, 5, 6, 7, 8, 9}.\n",
"\n",
"If this was a regression problem, we would often try to minimise the mean-squared-error between the output of the neural network and the correct prediction. As we saw in the presentation, this assumes that the underlying probability distribution of the prediction is a Gaussian, which is certainly not true for the distribution of digit classes: for one, the digit classes are discrete and Gaussians are only defined for continuous outputs. The most general probability distribution for a choice of 10 classes is a Categorical distribution (https://en.wikipedia.org/wiki/Categorical_distribution), which is simply a discrete distribution with a given probability value for each class. How can we then sculpt a function that maps the input image to a given class?\n",
"\n",
"Suppose the neural network provides an output $f_k(x)$ in the form of a list of probabilities, informing us of the probability that a given image belongs to a certain class $k$. If we know that a given input image x belongs to class C, then the true probability t for this image x to belong to each class is zero for classes that differ from C and 1 for the class C. The network's objective will be to output such probabilities, so that only the i-th component of the output is 1 if the input belongs to class i. The presentation shows how the Bayes' rule leads us naturally to minimize the cross entropy between the target probabilities and the predicted probabilities: $- \\sum_k t_k \\log f_k(x)$. One can gain intuition on this by reading more on the Information Theory concept of cross-entropy and how it relates to Mutual Information: minimizing the mutual information between the labels distribution and the predicted one moves them closer together: https://en.wikipedia.org/wiki/Cross_entropy\n",
"\n",
"The neural network will therefore model a parametrized function that maps the input image pixels into a vector with 10 components, which refer to the probability that the image correspond to that digit.\n"
]
},
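{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick sanity check, the cell below evaluates the cross-entropy $- \\sum_k t_k \\log f_k(x)$ by hand for a one-hot target and compares it with PyTorch's built-in `F.cross_entropy`, which applies the softmax and the cross-entropy in one numerically stable call. The logits are made-up numbers, used only for illustration."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import torch\n",
"import torch.nn.functional as F\n",
"\n",
"# made-up raw network outputs (logits) for a single image, and its true class\n",
"logits = torch.tensor([[2.0, 0.5, -1.0, 0.1, 0.0, 0.3, -0.2, 1.0, 0.0, -0.5]])\n",
"target = torch.tensor([3])\n",
"\n",
"# turn the logits into probabilities f_k(x)\n",
"probs = F.softmax(logits, dim=1)\n",
"\n",
"# one-hot target t_k: 1 for the true class, 0 for every other class\n",
"t = F.one_hot(target, num_classes=10).float()\n",
"\n",
"# cross-entropy by hand: - sum_k t_k log f_k(x)\n",
"manual = -(t * torch.log(probs)).sum(dim=1)\n",
"\n",
"# the same quantity with the built-in loss, which expects raw logits\n",
"builtin = F.cross_entropy(logits, target, reduction='none')\n",
"\n",
"print(manual, builtin)  # the two values agree"
]
},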
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Requirement already satisfied: torchvision in /home/daniloefl/miniconda3/envs/mltut/lib/python3.6/site-packages (0.11.2)\n",
"Requirement already satisfied: torch in /home/daniloefl/miniconda3/envs/mltut/lib/python3.6/site-packages (1.10.1)\n",
"Requirement already satisfied: pandas in /home/daniloefl/miniconda3/envs/mltut/lib/python3.6/site-packages (1.1.5)\n",
"Requirement already satisfied: numpy in /home/daniloefl/miniconda3/envs/mltut/lib/python3.6/site-packages (1.19.2)\n",
"Requirement already satisfied: matplotlib in /home/daniloefl/miniconda3/envs/mltut/lib/python3.6/site-packages (3.3.4)\n",
"Requirement already satisfied: pillow!=8.3.0,>=5.3.0 in /home/daniloefl/miniconda3/envs/mltut/lib/python3.6/site-packages (from torchvision) (8.3.1)\n",
"Requirement already satisfied: typing_extensions in /home/daniloefl/miniconda3/envs/mltut/lib/python3.6/site-packages (from torch) (3.10.0.2)\n",
"Requirement already satisfied: dataclasses in /home/daniloefl/miniconda3/envs/mltut/lib/python3.6/site-packages (from torch) (0.8)\n",
"Requirement already satisfied: python-dateutil>=2.7.3 in /home/daniloefl/miniconda3/envs/mltut/lib/python3.6/site-packages (from pandas) (2.8.2)\n",
"Requirement already satisfied: pytz>=2017.2 in /home/daniloefl/miniconda3/envs/mltut/lib/python3.6/site-packages (from pandas) (2021.3)\n",
"Requirement already satisfied: kiwisolver>=1.0.1 in /home/daniloefl/miniconda3/envs/mltut/lib/python3.6/site-packages (from matplotlib) (1.3.1)\n",
"Requirement already satisfied: cycler>=0.10 in /home/daniloefl/miniconda3/envs/mltut/lib/python3.6/site-packages (from matplotlib) (0.11.0)\n",
"Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.3 in /home/daniloefl/miniconda3/envs/mltut/lib/python3.6/site-packages (from matplotlib) (3.0.4)\n",
"Requirement already satisfied: six>=1.5 in /home/daniloefl/miniconda3/envs/mltut/lib/python3.6/site-packages (from python-dateutil>=2.7.3->pandas) (1.16.0)\n"
]
}
],
"source": [
"!pip install torchvision torch pandas numpy matplotlib"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "23feddde",
"metadata": {},
"outputs": [],
"source": [
"%matplotlib notebook\n",
"\n",
"# import standard PyTorch modules\n",
"import torch\n",
"import torch.nn as nn\n",
"import torch.nn.functional as F\n",
"\n",
"# import torchvision module to handle image manipulation\n",
"import torchvision\n",
"import torchvision.transforms as transforms\n",
"\n",
"import pandas as pd\n",
"import numpy as np\n",
"import matplotlib.pyplot as plt"
]
},
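{
"cell_type": "markdown",
"metadata": {},
"source": [
"Since automatic differentiation comes with GPU support, the optional check below picks a CUDA device when one is available and falls back to the CPU otherwise; tensors and models can later be moved there with `.to(device)`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# pick a GPU if one is available, otherwise fall back to the CPU\n",
"device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')\n",
"print('Using device:', device)"
]
},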
{
"cell_type": "markdown",
"id": "48433f6f",
"metadata": {},
"source": [
"PyTorch allows you to create a class that outputs a single data entry and use that to feed input to your neural network. An example of how you would write such a class is given below, but for this exercise we shall use something ready-made which loads the standard MNIST handwritten digits dataset, just to simplify things.\n",
"\n",
"If you want to load a different dataset (for example your own data!), feel free to copy and modify the example Dataset class below."
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "30205402",
"metadata": {},
"outputs": [],
"source": [
"class MyDataset(object):\n",
" def __init__(self):\n",
" pass\n",
" def __len__(self):\n",
" return 10 # these are how many samples I have\n",
" def __getitem__(self, idx):\n",
" # give me item with index idx\n",
" # read this from some file, but for the purposes of this example, generate a random image and label\n",
" my_image = np.random.randn(10,10, 1)\n",
" my_label = np.array(np.random.randint(10))\n",
" my_image = torch.from_numpy(my_image)\n",
" my_label = torch.from_numpy(my_label)\n",
" return {\"data\": my_image, \"label\": my_label}\n"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "cc0b0774",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"10\n"
]
}
],
"source": [
"my_dataset = MyDataset()\n",
"print(len(my_dataset))"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "6dccfac6",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'data': tensor([[[-2.2541e-01],\n",
" [-8.7492e-01],\n",
" [ 2.2757e-01],\n",
" [ 3.6083e-01],\n",
" [-1.0728e+00],\n",
" [-1.9445e+00],\n",
" [ 1.2723e+00],\n",
" [-4.6512e-01],\n",
" [ 7.8006e-01],\n",
" [-2.8025e+00]],\n",
"\n",
" [[ 2.5337e-01],\n",
" [-3.9780e-01],\n",
" [-1.5573e+00],\n",
" [-6.1211e-01],\n",
" [-1.7922e+00],\n",
" [ 1.4484e+00],\n",
" [-3.1899e-01],\n",
" [ 1.4920e-03],\n",
" [-1.3394e+00],\n",
" [ 2.9670e-01]],\n",
"\n",
" [[-1.1016e-01],\n",
" [ 1.6733e+00],\n",
" [ 2.7996e-01],\n",
" [-3.7264e-01],\n",
" [-1.4377e+00],\n",
" [ 1.7887e+00],\n",
" [ 1.6038e-01],\n",
" [ 7.4539e-01],\n",
" [ 1.9320e+00],\n",
" [ 5.8520e-01]],\n",
"\n",
" [[ 1.8378e-03],\n",
" [ 7.1617e-01],\n",
" [ 1.9320e-01],\n",
" [-9.8246e-01],\n",
" [ 1.0930e+00],\n",
" [ 1.4183e+00],\n",
" [ 1.9863e-01],\n",
" [-1.1845e-01],\n",
" [ 1.3640e+00],\n",
" [ 8.7122e-01]],\n",
"\n",
" [[ 1.6613e+00],\n",
" [ 1.2155e-01],\n",
" [ 3.4827e-01],\n",
" [ 9.9064e-01],\n",
" [-9.4938e-01],\n",
" [ 4.7694e-01],\n",
" [ 5.2231e-01],\n",
" [ 5.8428e-01],\n",
" [ 4.5853e-02],\n",
" [-2.5305e+00]],\n",
"\n",
" [[-8.4262e-01],\n",
" [ 9.2893e-01],\n",
" [ 1.0124e+00],\n",
" [-1.6017e+00],\n",
" [-1.4961e-01],\n",
" [ 2.2560e+00],\n",
" [ 2.0985e-01],\n",
" [ 7.7839e-01],\n",
" [ 1.5581e+00],\n",
" [-2.3309e+00]],\n",
"\n",
" [[ 1.5367e+00],\n",
" [-3.7147e-01],\n",
" [-2.3096e-01],\n",
" [ 1.1904e+00],\n",
" [ 1.9962e+00],\n",
" [-1.0334e-01],\n",
" [-6.0397e-01],\n",
" [ 1.0851e+00],\n",
" [-2.3377e-01],\n",
" [-7.0282e-01]],\n",
"\n",
" [[ 1.7610e+00],\n",
" [-1.3054e+00],\n",
" [ 1.6490e+00],\n",
" [-3.9569e-01],\n",
" [-1.5368e+00],\n",
" [-1.0189e+00],\n",
" [-1.9441e-02],\n",
" [ 9.1458e-01],\n",
" [ 1.2236e+00],\n",
" [ 8.6759e-01]],\n",
"\n",
" [[ 4.9696e-01],\n",
" [-1.0176e+00],\n",
" [ 2.7300e+00],\n",
" [-1.1193e+00],\n",
" [ 1.5149e+00],\n",
" [ 2.2174e+00],\n",
" [ 4.5604e-01],\n",
" [-1.0018e+00],\n",
" [ 6.1021e-01],\n",
" [-5.1552e-01]],\n",
"\n",
" [[ 2.3127e+00],\n",
" [-5.9054e-01],\n",
" [-1.1682e+00],\n",
" [-2.8864e-02],\n",
" [-1.8656e+00],\n",
" [-1.4874e+00],\n",
" [-1.7110e+00],\n",
" [ 8.5748e-01],\n",
" [-1.2645e+00],\n",
" [-3.9108e-01]]], dtype=torch.float64), 'label': tensor(3)}\n"
]
}
],
"source": [
"print(my_dataset[1])"
]
},
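{
"cell_type": "markdown",
"metadata": {},
"source": [
"To feed such a dataset to a network you would normally wrap it in a `torch.utils.data.DataLoader`, which handles shuffling and batching. The sketch below (with an arbitrary batch size of 4) iterates over `my_dataset` once and prints the shape of each batch."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from torch.utils.data import DataLoader\n",
"\n",
"# wrap the toy dataset in a DataLoader to get shuffled mini-batches\n",
"my_loader = DataLoader(my_dataset, batch_size=4, shuffle=True)\n",
"\n",
"for batch in my_loader:\n",
"    # the default collate function stacks the dictionaries returned by __getitem__\n",
"    print(batch['data'].shape, batch['label'].shape)"
]
},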
{
"cell_type": "markdown",
"id": "6f1c9da9",
"metadata": {},
"source": [
"But let's keep things simple and just focus on the actual neural network, using a standard class to load a standard dataset."
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [],
"source": [
"# Use standard MNIST dataset\n",
"my_dataset = torchvision.datasets.MNIST(\n",
" root = './data/MNIST',\n",
" train = True,\n",
" download = True,\n",
" transform = transforms.Compose([\n",
" transforms.ToTensor() \n",
" ])\n",
")"
]
},
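{
"cell_type": "markdown",
"metadata": {},
"source": [
"With the `ToTensor` transform each entry of the MNIST dataset comes back as an `(image, label)` pair, where the image is a float tensor of shape `(1, 28, 28)` with values in `[0, 1]`. The quick check below is only illustrative."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# inspect a single sample: an (image, label) pair\n",
"image, label = my_dataset[0]\n",
"print(image.shape, image.min().item(), image.max().item(), label)"
]
},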
{
"cell_type": "markdown",
"id": "527089bd",
"metadata": {},
"source": [
"Plot some of the data with their labels:"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "067b8105",
"metadata": {},
"outputs": [
{
"data": {
"application/javascript": [
"/* Put everything inside the global mpl namespace */\n",
"/* global mpl */\n",
"window.mpl = {};\n",
"\n",
"mpl.get_websocket_type = function () {\n",
" if (typeof WebSocket !== 'undefined') {\n",
" return WebSocket;\n",
" } else if (typeof MozWebSocket !== 'undefined') {\n",
" return MozWebSocket;\n",
" } else {\n",
" alert(\n",
" 'Your browser does not have WebSocket support. ' +\n",
" 'Please try Chrome, Safari or Firefox ≥ 6. ' +\n",
" 'Firefox 4 and 5 are also supported but you ' +\n",
" 'have to enable WebSockets in about:config.'\n",
" );\n",
" }\n",
"};\n",
"\n",
"mpl.figure = function (figure_id, websocket, ondownload, parent_element) {\n",
" this.id = figure_id;\n",
"\n",
" this.ws = websocket;\n",
"\n",
" this.supports_binary = this.ws.binaryType !== undefined;\n",
"\n",
" if (!this.supports_binary) {\n",
" var warnings = document.getElementById('mpl-warnings');\n",
" if (warnings) {\n",
" warnings.style.display = 'block';\n",
" warnings.textContent =\n",
" 'This browser does not support binary websocket messages. ' +\n",
" 'Performance may be slow.';\n",
" }\n",
" }\n",
"\n",
" this.imageObj = new Image();\n",
"\n",
" this.context = undefined;\n",
" this.message = undefined;\n",
" this.canvas = undefined;\n",
" this.rubberband_canvas = undefined;\n",
" this.rubberband_context = undefined;\n",
" this.format_dropdown = undefined;\n",
"\n",
" this.image_mode = 'full';\n",
"\n",
" this.root = document.createElement('div');\n",
" this.root.setAttribute('style', 'display: inline-block');\n",
" this._root_extra_style(this.root);\n",
"\n",
" parent_element.appendChild(this.root);\n",
"\n",
" this._init_header(this);\n",
" this._init_canvas(this);\n",
" this._init_toolbar(this);\n",
"\n",
" var fig = this;\n",
"\n",
" this.waiting = false;\n",
"\n",
" this.ws.onopen = function () {\n",
" fig.send_message('supports_binary', { value: fig.supports_binary });\n",
" fig.send_message('send_image_mode', {});\n",
" if (fig.ratio !== 1) {\n",
" fig.send_message('set_dpi_ratio', { dpi_ratio: fig.ratio });\n",
" }\n",
" fig.send_message('refresh', {});\n",
" };\n",
"\n",
" this.imageObj.onload = function () {\n",
" if (fig.image_mode === 'full') {\n",
" // Full images could contain transparency (where diff images\n",
" // almost always do), so we need to clear the canvas so that\n",
" // there is no ghosting.\n",
" fig.context.clearRect(0, 0, fig.canvas.width, fig.canvas.height);\n",
" }\n",
" fig.context.drawImage(fig.imageObj, 0, 0);\n",
" };\n",
"\n",
" this.imageObj.onunload = function () {\n",
" fig.ws.close();\n",
" };\n",
"\n",
" this.ws.onmessage = this._make_on_message_function(this);\n",
"\n",
" this.ondownload = ondownload;\n",
"};\n",
"\n",
"mpl.figure.prototype._init_header = function () {\n",
" var titlebar = document.createElement('div');\n",
" titlebar.classList =\n",
" 'ui-dialog-titlebar ui-widget-header ui-corner-all ui-helper-clearfix';\n",
" var titletext = document.createElement('div');\n",
" titletext.classList = 'ui-dialog-title';\n",
" titletext.setAttribute(\n",
" 'style',\n",
" 'width: 100%; text-align: center; padding: 3px;'\n",
" );\n",
" titlebar.appendChild(titletext);\n",
" this.root.appendChild(titlebar);\n",
" this.header = titletext;\n",
"};\n",
"\n",
"mpl.figure.prototype._canvas_extra_style = function (_canvas_div) {};\n",
"\n",
"mpl.figure.prototype._root_extra_style = function (_canvas_div) {};\n",
"\n",
"mpl.figure.prototype._init_canvas = function () {\n",
" var fig = this;\n",
"\n",
" var canvas_div = (this.canvas_div = document.createElement('div'));\n",
" canvas_div.setAttribute(\n",
" 'style',\n",
" 'border: 1px solid #ddd;' +\n",
" 'box-sizing: content-box;' +\n",
" 'clear: both;' +\n",
" 'min-height: 1px;' +\n",
" 'min-width: 1px;' +\n",
" 'outline: 0;' +\n",
" 'overflow: hidden;' +\n",
" 'position: relative;' +\n",
" 'resize: both;'\n",
" );\n",
"\n",
" function on_keyboard_event_closure(name) {\n",
" return function (event) {\n",
" return fig.key_event(event, name);\n",
" };\n",
" }\n",
"\n",
" canvas_div.addEventListener(\n",
" 'keydown',\n",
" on_keyboard_event_closure('key_press')\n",
" );\n",
" canvas_div.addEventListener(\n",
" 'keyup',\n",
" on_keyboard_event_closure('key_release')\n",
" );\n",
"\n",
" this._canvas_extra_style(canvas_div);\n",
" this.root.appendChild(canvas_div);\n",
"\n",
" var canvas = (this.canvas = document.createElement('canvas'));\n",
" canvas.classList.add('mpl-canvas');\n",
" canvas.setAttribute('style', 'box-sizing: content-box;');\n",
"\n",
" this.context = canvas.getContext('2d');\n",
"\n",
" var backingStore =\n",
" this.context.backingStorePixelRatio ||\n",
" this.context.webkitBackingStorePixelRatio ||\n",
" this.context.mozBackingStorePixelRatio ||\n",
" this.context.msBackingStorePixelRatio ||\n",
" this.context.oBackingStorePixelRatio ||\n",
" this.context.backingStorePixelRatio ||\n",
" 1;\n",
"\n",
" this.ratio = (window.devicePixelRatio || 1) / backingStore;\n",
"\n",
" var rubberband_canvas = (this.rubberband_canvas = document.createElement(\n",
" 'canvas'\n",
" ));\n",
" rubberband_canvas.setAttribute(\n",
" 'style',\n",
" 'box-sizing: content-box; position: absolute; left: 0; top: 0; z-index: 1;'\n",
" );\n",
"\n",
" // Apply a ponyfill if ResizeObserver is not implemented by browser.\n",
" if (this.ResizeObserver === undefined) {\n",
" if (window.ResizeObserver !== undefined) {\n",
" this.ResizeObserver = window.ResizeObserver;\n",
" } else {\n",
" var obs = _JSXTOOLS_RESIZE_OBSERVER({});\n",
" this.ResizeObserver = obs.ResizeObserver;\n",
" }\n",
" }\n",
"\n",
" this.resizeObserverInstance = new this.ResizeObserver(function (entries) {\n",
" var nentries = entries.length;\n",
" for (var i = 0; i < nentries; i++) {\n",
" var entry = entries[i];\n",
" var width, height;\n",
" if (entry.contentBoxSize) {\n",
" if (entry.contentBoxSize instanceof Array) {\n",
" // Chrome 84 implements new version of spec.\n",
" width = entry.contentBoxSize[0].inlineSize;\n",
" height = entry.contentBoxSize[0].blockSize;\n",
" } else {\n",
" // Firefox implements old version of spec.\n",
" width = entry.contentBoxSize.inlineSize;\n",
" height = entry.contentBoxSize.blockSize;\n",
" }\n",
" } else {\n",
" // Chrome <84 implements even older version of spec.\n",
" width = entry.contentRect.width;\n",
" height = entry.contentRect.height;\n",
" }\n",
"\n",
" // Keep the size of the canvas and rubber band canvas in sync with\n",
" // the canvas container.\n",
" if (entry.devicePixelContentBoxSize) {\n",
" // Chrome 84 implements new version of spec.\n",
" canvas.setAttribute(\n",
" 'width',\n",
" entry.devicePixelContentBoxSize[0].inlineSize\n",
" );\n",
" canvas.setAttribute(\n",
" 'height',\n",
" entry.devicePixelContentBoxSize[0].blockSize\n",
" );\n",
" } else {\n",
" canvas.setAttribute('width', width * fig.ratio);\n",
" canvas.setAttribute('height', height * fig.ratio);\n",
" }\n",
" canvas.setAttribute(\n",
" 'style',\n",
" 'width: ' + width + 'px; height: ' + height + 'px;'\n",
" );\n",
"\n",
" rubberband_canvas.setAttribute('width', width);\n",
" rubberband_canvas.setAttribute('height', height);\n",
"\n",
" // And update the size in Python. We ignore the initial 0/0 size\n",
" // that occurs as the element is placed into the DOM, which should\n",
" // otherwise not happen due to the minimum size styling.\n",
" if (fig.ws.readyState == 1 && width != 0 && height != 0) {\n",
" fig.request_resize(width, height);\n",
" }\n",
" }\n",
" });\n",
" this.resizeObserverInstance.observe(canvas_div);\n",
"\n",
" function on_mouse_event_closure(name) {\n",
" return function (event) {\n",
" return fig.mouse_event(event, name);\n",
" };\n",
" }\n",
"\n",
" rubberband_canvas.addEventListener(\n",
" 'mousedown',\n",
" on_mouse_event_closure('button_press')\n",
" );\n",
" rubberband_canvas.addEventListener(\n",
" 'mouseup',\n",
" on_mouse_event_closure('button_release')\n",
" );\n",
" // Throttle sequential mouse events to 1 every 20ms.\n",
" rubberband_canvas.addEventListener(\n",
" 'mousemove',\n",
" on_mouse_event_closure('motion_notify')\n",
" );\n",
"\n",
" rubberband_canvas.addEventListener(\n",
" 'mouseenter',\n",
" on_mouse_event_closure('figure_enter')\n",
" );\n",
" rubberband_canvas.addEventListener(\n",
" 'mouseleave',\n",
" on_mouse_event_closure('figure_leave')\n",
" );\n",
"\n",
" canvas_div.addEventListener('wheel', function (event) {\n",
" if (event.deltaY < 0) {\n",
" event.step = 1;\n",
" } else {\n",
" event.step = -1;\n",
" }\n",
" on_mouse_event_closure('scroll')(event);\n",
" });\n",
"\n",
" canvas_div.appendChild(canvas);\n",
" canvas_div.appendChild(rubberband_canvas);\n",
"\n",
" this.rubberband_context = rubberband_canvas.getContext('2d');\n",
" this.rubberband_context.strokeStyle = '#000000';\n",
"\n",
" this._resize_canvas = function (width, height, forward) {\n",
" if (forward) {\n",
" canvas_div.style.width = width + 'px';\n",
" canvas_div.style.height = height + 'px';\n",
" }\n",
" };\n",
"\n",
" // Disable right mouse context menu.\n",
" this.rubberband_canvas.addEventListener('contextmenu', function (_e) {\n",
" event.preventDefault();\n",
" return false;\n",
" });\n",
"\n",
" function set_focus() {\n",
" canvas.focus();\n",
" canvas_div.focus();\n",
" }\n",
"\n",
" window.setTimeout(set_focus, 100);\n",
"};\n",
"\n",
"mpl.figure.prototype._init_toolbar = function () {\n",
" var fig = this;\n",
"\n",
" var toolbar = document.createElement('div');\n",
" toolbar.classList = 'mpl-toolbar';\n",
" this.root.appendChild(toolbar);\n",
"\n",
" function on_click_closure(name) {\n",
" return function (_event) {\n",
" return fig.toolbar_button_onclick(name);\n",
" };\n",
" }\n",
"\n",
" function on_mouseover_closure(tooltip) {\n",
" return function (event) {\n",
" if (!event.currentTarget.disabled) {\n",
" return fig.toolbar_button_onmouseover(tooltip);\n",
" }\n",
" };\n",
" }\n",
"\n",
" fig.buttons = {};\n",
" var buttonGroup = document.createElement('div');\n",
" buttonGroup.classList = 'mpl-button-group';\n",
" for (var toolbar_ind in mpl.toolbar_items) {\n",
" var name = mpl.toolbar_items[toolbar_ind][0];\n",
" var tooltip = mpl.toolbar_items[toolbar_ind][1];\n",
" var image = mpl.toolbar_items[toolbar_ind][2];\n",
" var method_name = mpl.toolbar_items[toolbar_ind][3];\n",
"\n",
" if (!name) {\n",
" /* Instead of a spacer, we start a new button group. */\n",
" if (buttonGroup.hasChildNodes()) {\n",
" toolbar.appendChild(buttonGroup);\n",
" }\n",
" buttonGroup = document.createElement('div');\n",
" buttonGroup.classList = 'mpl-button-group';\n",
" continue;\n",
" }\n",
"\n",
" var button = (fig.buttons[name] = document.createElement('button'));\n",
" button.classList = 'mpl-widget';\n",
" button.setAttribute('role', 'button');\n",
" button.setAttribute('aria-disabled', 'false');\n",
" button.addEventListener('click', on_click_closure(method_name));\n",
" button.addEventListener('mouseover', on_mouseover_closure(tooltip));\n",
"\n",
" var icon_img = document.createElement('img');\n",
" icon_img.src = '_images/' + image + '.png';\n",
" icon_img.srcset = '_images/' + image + '_large.png 2x';\n",
" icon_img.alt = tooltip;\n",
" button.appendChild(icon_img);\n",
"\n",
" buttonGroup.appendChild(button);\n",
" }\n",
"\n",
" if (buttonGroup.hasChildNodes()) {\n",
" toolbar.appendChild(buttonGroup);\n",
" }\n",
"\n",
" var fmt_picker = document.createElement('select');\n",
" fmt_picker.classList = 'mpl-widget';\n",
" toolbar.appendChild(fmt_picker);\n",
" this.format_dropdown = fmt_picker;\n",
"\n",
" for (var ind in mpl.extensions) {\n",
" var fmt = mpl.extensions[ind];\n",
" var option = document.createElement('option');\n",
" option.selected = fmt === mpl.default_extension;\n",
" option.innerHTML = fmt;\n",
" fmt_picker.appendChild(option);\n",
" }\n",
"\n",
" var status_bar = document.createElement('span');\n",
" status_bar.classList = 'mpl-message';\n",
" toolbar.appendChild(status_bar);\n",
" this.message = status_bar;\n",
"};\n",
"\n",
"mpl.figure.prototype.request_resize = function (x_pixels, y_pixels) {\n",
" // Request matplotlib to resize the figure. Matplotlib will then trigger a resize in the client,\n",
" // which will in turn request a refresh of the image.\n",
" this.send_message('resize', { width: x_pixels, height: y_pixels });\n",
"};\n",
"\n",
"mpl.figure.prototype.send_message = function (type, properties) {\n",
" properties['type'] = type;\n",
" properties['figure_id'] = this.id;\n",
" this.ws.send(JSON.stringify(properties));\n",
"};\n",
"\n",
"mpl.figure.prototype.send_draw_message = function () {\n",
" if (!this.waiting) {\n",
" this.waiting = true;\n",
" this.ws.send(JSON.stringify({ type: 'draw', figure_id: this.id }));\n",
" }\n",
"};\n",
"\n",
"mpl.figure.prototype.handle_save = function (fig, _msg) {\n",
" var format_dropdown = fig.format_dropdown;\n",
" var format = format_dropdown.options[format_dropdown.selectedIndex].value;\n",
" fig.ondownload(fig, format);\n",
"};\n",
"\n",
"mpl.figure.prototype.handle_resize = function (fig, msg) {\n",
" var size = msg['size'];\n",
" if (size[0] !== fig.canvas.width || size[1] !== fig.canvas.height) {\n",
" fig._resize_canvas(size[0], size[1], msg['forward']);\n",
" fig.send_message('refresh', {});\n",
" }\n",
"};\n",
"\n",
"mpl.figure.prototype.handle_rubberband = function (fig, msg) {\n",
" var x0 = msg['x0'] / fig.ratio;\n",
" var y0 = (fig.canvas.height - msg['y0']) / fig.ratio;\n",
" var x1 = msg['x1'] / fig.ratio;\n",
" var y1 = (fig.canvas.height - msg['y1']) / fig.ratio;\n",
" x0 = Math.floor(x0) + 0.5;\n",
" y0 = Math.floor(y0) + 0.5;\n",
" x1 = Math.floor(x1) + 0.5;\n",
" y1 = Math.floor(y1) + 0.5;\n",
" var min_x = Math.min(x0, x1);\n",
" var min_y = Math.min(y0, y1);\n",
" var width = Math.abs(x1 - x0);\n",
" var height = Math.abs(y1 - y0);\n",
"\n",
" fig.rubberband_context.clearRect(\n",
" 0,\n",
" 0,\n",
" fig.canvas.width / fig.ratio,\n",
" fig.canvas.height / fig.ratio\n",
" );\n",
"\n",
" fig.rubberband_context.strokeRect(min_x, min_y, width, height);\n",
"};\n",
"\n",
"mpl.figure.prototype.handle_figure_label = function (fig, msg) {\n",
" // Updates the figure title.\n",
" fig.header.textContent = msg['label'];\n",
"};\n",
"\n",
"mpl.figure.prototype.handle_cursor = function (fig, msg) {\n",
" var cursor = msg['cursor'];\n",
" switch (cursor) {\n",
" case 0:\n",
" cursor = 'pointer';\n",
" break;\n",
" case 1:\n",
" cursor = 'default';\n",
" break;\n",
" case 2:\n",
" cursor = 'crosshair';\n",
" break;\n",
" case 3:\n",
" cursor = 'move';\n",
" break;\n",
" }\n",
" fig.rubberband_canvas.style.cursor = cursor;\n",
"};\n",
"\n",
"mpl.figure.prototype.handle_message = function (fig, msg) {\n",
" fig.message.textContent = msg['message'];\n",
"};\n",
"\n",
"mpl.figure.prototype.handle_draw = function (fig, _msg) {\n",
" // Request the server to send over a new figure.\n",
" fig.send_draw_message();\n",
"};\n",
"\n",
"mpl.figure.prototype.handle_image_mode = function (fig, msg) {\n",
" fig.image_mode = msg['mode'];\n",
"};\n",
"\n",
"mpl.figure.prototype.handle_history_buttons = function (fig, msg) {\n",
" for (var key in msg) {\n",
" if (!(key in fig.buttons)) {\n",
" continue;\n",
" }\n",
" fig.buttons[key].disabled = !msg[key];\n",
" fig.buttons[key].setAttribute('aria-disabled', !msg[key]);\n",
" }\n",
"};\n",
"\n",
"mpl.figure.prototype.handle_navigate_mode = function (fig, msg) {\n",
" if (msg['mode'] === 'PAN') {\n",
" fig.buttons['Pan'].classList.add('active');\n",
" fig.buttons['Zoom'].classList.remove('active');\n",
" } else if (msg['mode'] === 'ZOOM') {\n",
" fig.buttons['Pan'].classList.remove('active');\n",
" fig.buttons['Zoom'].classList.add('active');\n",
" } else {\n",
" fig.buttons['Pan'].classList.remove('active');\n",
" fig.buttons['Zoom'].classList.remove('active');\n",
" }\n",
"};\n",
"\n",
"mpl.figure.prototype.updated_canvas_event = function () {\n",
" // Called whenever the canvas gets updated.\n",
" this.send_message('ack', {});\n",
"};\n",
"\n",
"// A function to construct a web socket function for onmessage handling.\n",
"// Called in the figure constructor.\n",
"mpl.figure.prototype._make_on_message_function = function (fig) {\n",
" return function socket_on_message(evt) {\n",
" if (evt.data instanceof Blob) {\n",
" /* FIXME: We get \"Resource interpreted as Image but\n",
" * transferred with MIME type text/plain:\" errors on\n",
" * Chrome. But how to set the MIME type? It doesn't seem\n",
" * to be part of the websocket stream */\n",
" evt.data.type = 'image/png';\n",
"\n",
" /* Free the memory for the previous frames */\n",
" if (fig.imageObj.src) {\n",
" (window.URL || window.webkitURL).revokeObjectURL(\n",
" fig.imageObj.src\n",
" );\n",
" }\n",
"\n",
" fig.imageObj.src = (window.URL || window.webkitURL).createObjectURL(\n",
" evt.data\n",
" );\n",
" fig.updated_canvas_event();\n",
" fig.waiting = false;\n",
" return;\n",
" } else if (\n",
" typeof evt.data === 'string' &&\n",
" evt.data.slice(0, 21) === 'data:image/png;base64'\n",
" ) {\n",
" fig.imageObj.src = evt.data;\n",
" fig.updated_canvas_event();\n",
" fig.waiting = false;\n",
" return;\n",
" }\n",
"\n",
" var msg = JSON.parse(evt.data);\n",
" var msg_type = msg['type'];\n",
"\n",
" // Call the \"handle_{type}\" callback, which takes\n",
" // the figure and JSON message as its only arguments.\n",
" try {\n",
" var callback = fig['handle_' + msg_type];\n",
" } catch (e) {\n",
" console.log(\n",
" \"No handler for the '\" + msg_type + \"' message type: \",\n",
" msg\n",
" );\n",
" return;\n",
" }\n",
"\n",
" if (callback) {\n",
" try {\n",
" // console.log(\"Handling '\" + msg_type + \"' message: \", msg);\n",
" callback(fig, msg);\n",
" } catch (e) {\n",
" console.log(\n",
" \"Exception inside the 'handler_\" + msg_type + \"' callback:\",\n",
" e,\n",
" e.stack,\n",
" msg\n",
" );\n",
" }\n",
" }\n",
" };\n",
"};\n",
"\n",
"// from http://stackoverflow.com/questions/1114465/getting-mouse-location-in-canvas\n",
"mpl.findpos = function (e) {\n",
" //this section is from http://www.quirksmode.org/js/events_properties.html\n",
" var targ;\n",
" if (!e) {\n",
" e = window.event;\n",
" }\n",
" if (e.target) {\n",
" targ = e.target;\n",
" } else if (e.srcElement) {\n",
" targ = e.srcElement;\n",
" }\n",
" if (targ.nodeType === 3) {\n",
" // defeat Safari bug\n",
" targ = targ.parentNode;\n",
" }\n",
"\n",
" // pageX,Y are the mouse positions relative to the document\n",
" var boundingRect = targ.getBoundingClientRect();\n",
" var x = e.pageX - (boundingRect.left + document.body.scrollLeft);\n",
" var y = e.pageY - (boundingRect.top + document.body.scrollTop);\n",
"\n",
" return { x: x, y: y };\n",
"};\n",
"\n",
"/*\n",
" * return a copy of an object with only non-object keys\n",
" * we need this to avoid circular references\n",
" * http://stackoverflow.com/a/24161582/3208463\n",
" */\n",
"function simpleKeys(original) {\n",
" return Object.keys(original).reduce(function (obj, key) {\n",
" if (typeof original[key] !== 'object') {\n",
" obj[key] = original[key];\n",
" }\n",
" return obj;\n",
" }, {});\n",
"}\n",
"\n",
"mpl.figure.prototype.mouse_event = function (event, name) {\n",
" var canvas_pos = mpl.findpos(event);\n",
"\n",
" if (name === 'button_press') {\n",
" this.canvas.focus();\n",
" this.canvas_div.focus();\n",
" }\n",
"\n",
" var x = canvas_pos.x * this.ratio;\n",
" var y = canvas_pos.y * this.ratio;\n",
"\n",
" this.send_message(name, {\n",
" x: x,\n",
" y: y,\n",
" button: event.button,\n",
" step: event.step,\n",
" guiEvent: simpleKeys(event),\n",
" });\n",
"\n",
" /* This prevents the web browser from automatically changing to\n",
" * the text insertion cursor when the button is pressed. We want\n",
" * to control all of the cursor setting manually through the\n",
" * 'cursor' event from matplotlib */\n",
" event.preventDefault();\n",
" return false;\n",
"};\n",
"\n",
"mpl.figure.prototype._key_event_extra = function (_event, _name) {\n",
" // Handle any extra behaviour associated with a key event\n",
"};\n",
"\n",
"mpl.figure.prototype.key_event = function (event, name) {\n",
" // Prevent repeat events\n",
" if (name === 'key_press') {\n",
" if (event.which === this._key) {\n",
" return;\n",
" } else {\n",
" this._key = event.which;\n",
" }\n",
" }\n",
" if (name === 'key_release') {\n",
" this._key = null;\n",
" }\n",
"\n",
" var value = '';\n",
" if (event.ctrlKey && event.which !== 17) {\n",
" value += 'ctrl+';\n",
" }\n",
" if (event.altKey && event.which !== 18) {\n",
" value += 'alt+';\n",
" }\n",
" if (event.shiftKey && event.which !== 16) {\n",
" value += 'shift+';\n",
" }\n",
"\n",
" value += 'k';\n",
" value += event.which.toString();\n",
"\n",
" this._key_event_extra(event, name);\n",
"\n",
" this.send_message(name, { key: value, guiEvent: simpleKeys(event) });\n",
" return false;\n",
"};\n",
"\n",
"mpl.figure.prototype.toolbar_button_onclick = function (name) {\n",
" if (name === 'download') {\n",
" this.handle_save(this, null);\n",
" } else {\n",
" this.send_message('toolbar_button', { name: name });\n",
" }\n",
"};\n",
"\n",
"mpl.figure.prototype.toolbar_button_onmouseover = function (tooltip) {\n",
" this.message.textContent = tooltip;\n",
"};\n",
"\n",
"///////////////// REMAINING CONTENT GENERATED BY embed_js.py /////////////////\n",
"// prettier-ignore\n",
"var _JSXTOOLS_RESIZE_OBSERVER=function(A){var t,i=new WeakMap,n=new WeakMap,a=new WeakMap,r=new WeakMap,o=new Set;function s(e){if(!(this instanceof s))throw new TypeError(\"Constructor requires 'new' operator\");i.set(this,e)}function h(){throw new TypeError(\"Function is not a constructor\")}function c(e,t,i,n){e=0 in arguments?Number(arguments[0]):0,t=1 in arguments?Number(arguments[1]):0,i=2 in arguments?Number(arguments[2]):0,n=3 in arguments?Number(arguments[3]):0,this.right=(this.x=this.left=e)+(this.width=i),this.bottom=(this.y=this.top=t)+(this.height=n),Object.freeze(this)}function d(){t=requestAnimationFrame(d);var s=new WeakMap,p=new Set;o.forEach((function(t){r.get(t).forEach((function(i){var r=t instanceof window.SVGElement,o=a.get(t),d=r?0:parseFloat(o.paddingTop),f=r?0:parseFloat(o.paddingRight),l=r?0:parseFloat(o.paddingBottom),u=r?0:parseFloat(o.paddingLeft),g=r?0:parseFloat(o.borderTopWidth),m=r?0:parseFloat(o.borderRightWidth),w=r?0:parseFloat(o.borderBottomWidth),b=u+f,F=d+l,v=(r?0:parseFloat(o.borderLeftWidth))+m,W=g+w,y=r?0:t.offsetHeight-W-t.clientHeight,E=r?0:t.offsetWidth-v-t.clientWidth,R=b+v,z=F+W,M=r?t.width:parseFloat(o.width)-R-E,O=r?t.height:parseFloat(o.height)-z-y;if(n.has(t)){var k=n.get(t);if(k[0]===M&&k[1]===O)return}n.set(t,[M,O]);var S=Object.create(h.prototype);S.target=t,S.contentRect=new c(u,d,M,O),s.has(i)||(s.set(i,[]),p.add(i)),s.get(i).push(S)}))})),p.forEach((function(e){i.get(e).call(e,s.get(e),e)}))}return s.prototype.observe=function(i){if(i instanceof window.Element){r.has(i)||(r.set(i,new Set),o.add(i),a.set(i,window.getComputedStyle(i)));var n=r.get(i);n.has(this)||n.add(this),cancelAnimationFrame(t),t=requestAnimationFrame(d)}},s.prototype.unobserve=function(i){if(i instanceof window.Element&&r.has(i)){var n=r.get(i);n.has(this)&&(n.delete(this),n.size||(r.delete(i),o.delete(i))),n.size||r.delete(i),o.size||cancelAnimationFrame(t)}},A.DOMRectReadOnly=c,A.ResizeObserver=s,A.ResizeObserverEntry=h,A}; // eslint-disable-line\n",
"mpl.toolbar_items = [[\"Home\", \"Reset original view\", \"fa fa-home icon-home\", \"home\"], [\"Back\", \"Back to previous view\", \"fa fa-arrow-left icon-arrow-left\", \"back\"], [\"Forward\", \"Forward to next view\", \"fa fa-arrow-right icon-arrow-right\", \"forward\"], [\"\", \"\", \"\", \"\"], [\"Pan\", \"Left button pans, Right button zooms\\nx/y fixes axis, CTRL fixes aspect\", \"fa fa-arrows icon-move\", \"pan\"], [\"Zoom\", \"Zoom to rectangle\\nx/y fixes axis, CTRL fixes aspect\", \"fa fa-square-o icon-check-empty\", \"zoom\"], [\"\", \"\", \"\", \"\"], [\"Download\", \"Download plot\", \"fa fa-floppy-o icon-save\", \"download\"]];\n",
"\n",
"mpl.extensions = [\"eps\", \"jpeg\", \"pdf\", \"png\", \"ps\", \"raw\", \"svg\", \"tif\"];\n",
"\n",
"mpl.default_extension = \"png\";/* global mpl */\n",
"\n",
"var comm_websocket_adapter = function (comm) {\n",
" // Create a \"websocket\"-like object which calls the given IPython comm\n",
" // object with the appropriate methods. Currently this is a non binary\n",
" // socket, so there is still some room for performance tuning.\n",
" var ws = {};\n",
"\n",
" ws.close = function () {\n",
" comm.close();\n",
" };\n",
" ws.send = function (m) {\n",
" //console.log('sending', m);\n",
" comm.send(m);\n",
" };\n",
" // Register the callback with on_msg.\n",
" comm.on_msg(function (msg) {\n",
" //console.log('receiving', msg['content']['data'], msg);\n",
" // Pass the mpl event to the overridden (by mpl) onmessage function.\n",
" ws.onmessage(msg['content']['data']);\n",