public void Convolve2DPassesGradientCheck()
{
    //int[] poolingShape = new int[] { 1, 1 };
    int[] kernelShape = new int[] { 7, 7 };
    int[] inputShape = new int[] { 100, 100 };
    var iS = NN.Array(inputShape).As<float>();
    var kS = NN.Array(kernelShape).As<float>();

    // layers
    var W = T.Shared(NN.Random.Uniform(-0.01f, 0.01f, kernelShape).As<float>(), "W");
    //var flatShape = ((inputShape[0] + kernelShape[0] - 1) / poolingShape[0]) * ((inputShape[1] + kernelShape[1] - 1) / poolingShape[1]);
    var flatShape = (inputShape[0] + kernelShape[0] - 1) * (inputShape[1] + kernelShape[1] - 1);
    var scaling = (iS[0] + kS[0] - 1f) + (iS[1] + kS[1] - 1f);
    var S = T.Shared(NN.Random.Uniform(-10f, 10f, 2, flatShape).As<float>() / scaling, "S");
    var Sb = T.Shared(NN.Zeros<float>(2, 1), "Sb");

    var x = T.Matrix<float>(inputShape[0], inputShape[1], "x"); // [inputLength]
    var h = T.Sigmoid(T.Convolve2d(x, W, mode: ConvMode.Full));
    //h = T.MaxPooling2d(h, poolingShape[0], poolingShape[1], true);
    h = h.Reshape(flatShape, 1);

    var debug = (T.Dot(S, h) + Sb).Reshape(2);
    var pred = T.Softmax(debug);
    var nll = -T.Mean(T.Log(pred)[1]);

    AssertTensor.PassesGradientCheck(x, nll, W, relativeErr: 1e-3f, absErr: 1e-3f);
}
public Shared()
{
    NN.Random.Seed(123); // setting the seed of NumNet

    // creating word2vec
    var matrix = NN.Random.Normal(0, 1, 10, 4);
    var words = new string[10] { "a", "b", "c", "d", "e", "f", "g", "h", "i", "j" };
    this.W2v = new Word2Vec(words, matrix);

    // creating sample vectors for neighbor search
    Test1 = NN.Random.Normal(0, 1, 4); // vector
    Test2 = NN.Random.Normal(0, 1, 4); // second vector
    var values = new float[8];
    for (int i = 0; i < 4; i++)
    {
        values[2 * i] = Test1.Values[i];
        values[2 * i + 1] = Test2.Values[i];
    }
    Test3 = NN.Array(values).Reshape(4, 2); // the first two vectors interleaved, as the columns of a (4, 2) array
}
public void TestSumInt4_2()
{
    var a = NN.Range(24).Reshape(1, 2, 3, 4);
    var b = NN.Range(12).Reshape(3, 4);
    var expected = NN.Array(new int[,,,]
    {
        {
            {
                { 0, 2, 4, 6 },
                { 8, 10, 12, 14 },
                { 16, 18, 20, 22 }
            },
            {
                { 12, 14, 16, 18 },
                { 20, 22, 24, 26 },
                { 28, 30, 32, 34 }
            }
        }
    });
    var c = a + b;
    AssertArray.AreEqual(expected.Values, c.Values);
}
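The broadcast that this test asserts — a `(3, 4)` array added to a `(1, 2, 3, 4)` array, with `b` repeated across the two leading axes — matches NumPy's rule. A minimal NumPy sketch of the same sum (assuming NumPy is available):

```python
import numpy as np

a = np.arange(24).reshape(1, 2, 3, 4)
b = np.arange(12).reshape(3, 4)
c = a + b  # b broadcasts across the leading (1, 2) axes

assert c.shape == (1, 2, 3, 4)
assert (c[0, 0] == 2 * b).all()       # first block: arange(12) + arange(12)
assert (c[0, 1] == (b + 12) + b).all()  # second block of a is offset by 12
```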
public void TestScanOnTanhSumDot()
{
    var W = T.Shared(0.2f * NN.Random.Uniform(-1.0f, 1.0f, 4, 5).As<float>(), "W");
    Func<Tensor<float>, Tensor<float>, Tensor<float>> recurrence =
        (x, acc) => T.Tanh(acc + T.Dot(W, x));

    var X = T.Matrix<float>(-1, 5, "X");
    var acc0 = T.Shared(NN.Zeros<float>(4), "acc0");
    var result = T.Scan(fn: recurrence, sequences: new[] { X }, outputsInfo: acc0);
    var norm2 = T.Norm2(result[-1]);

    var f = T.Function(X, norm2);
    var grad = T.Grad(norm2, W);
    var df = T.Function(input: X, output: (norm2, grad));
    df(NN.Array(new[,] { { 0f, 0f, 0f, 0f, 0f } }));

    AssertTensor.PassesGradientCheck(X, norm2, acc0);
    AssertTensor.PassesGradientCheck(X, norm2, W);
}
public void TestShuffleInplaceDim3()
{
    var a = NN.Range(24).Reshape(3, 2, 4);
    var b = NN.Range(24).Reshape(3, 2, 4);
    var c = NN.Range(24).Reshape(3, 2, 4);
    int[] perms1 = new int[3] { 0, 2, 1 };
    int[] perms2 = new int[2] { 1, 0 };
    int[] perms3 = new int[4] { 3, 1, 0, 2 };
    var expected1 = NN.Array(new int[24]
    {
        0, 1, 2, 3, 4, 5, 6, 7,
        16, 17, 18, 19, 20, 21, 22, 23,
        8, 9, 10, 11, 12, 13, 14, 15
    }).Reshape(3, 2, 4);
    var expected2 = NN.Array(new int[24]
    {
        4, 5, 6, 7, 0, 1, 2, 3,
        12, 13, 14, 15, 8, 9, 10, 11,
        20, 21, 22, 23, 16, 17, 18, 19
    }).Reshape(3, 2, 4);
    var expected3 = NN.Array(new int[24]
    {
        2, 1, 3, 0, 6, 5, 7, 4,
        10, 9, 11, 8, 14, 13, 15, 12,
        18, 17, 19, 16, 22, 21, 23, 20
    }).Reshape(3, 2, 4);
    a.ShuffleInplace(perms: perms1);
    b.ShuffleInplace(perms: perms2, axis: 1);
    c.ShuffleInplace(perms: perms3, axis: 2);
    AssertArray.AreEqual(a, expected1);
    AssertArray.AreEqual(b, expected2);
    AssertArray.AreEqual(c, expected3);
}
public void CanBroadcast_2_to_3_1()
{
    // http://www.onlamp.com/pub/a/python/2000/09/27/numerically.html?page=2
    //
    //   z = np.array([1, 2])
    //   v = np.array([[3], [4], [5]])
    //   z + v
    //
    // When comparing the size of each axis, if either of the compared axes
    // has a size of one, broadcasting can also occur.
    var z = NN.Array(new[] { 1, 2 });
    AssertArray.AreEqual(z.Shape, new int[] { 2 });
    var v = NN.Array(new[,] { { 3 }, { 4 }, { 5 } });
    AssertArray.AreEqual(v.Shape, new int[] { 3, 1 });
    AssertArray.AreEqual(new[,] { { 4, 5 }, { 5, 6 }, { 6, 7 } }, z + v);
    // In this form, the first multiarray z was extended to a (3, 2) multiarray
    // and the second multiarray v was extended to a (3, 2) multiarray.
    // Essentially, broadcasting occurred on both operands! This only occurs
    // when the axis size of one of the multiarrays has the value of one.
}
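The NumPy snippet cited in the comment can be run as-is to confirm the expected values; both operands are stretched to `(3, 2)`:

```python
import numpy as np

z = np.array([1, 2])           # shape (2,)
v = np.array([[3], [4], [5]])  # shape (3, 1)

# z stretches along axis 0, v stretches along axis 1: both become (3, 2).
s = z + v
assert (s == np.array([[4, 5], [5, 6], [6, 7]])).all()
```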
public void TestCombine()
{
    var t = NN.Ones<float>(5, 4, 3);
    t[2, _, _] *= 2;
    t[_, 1, _] *= -1;
    t[_, _, 2] *= 3;
    var x = NN.Array<float>(1, -1, 3);
    var y = NN.Array<float>(1, -1, 3, 2);
    var txy = t.Combine(x, y);

    var z = NN.Zeros<float>(5);
    for (int k = 0; k < z.Shape[0]; ++k)
        for (int j = 0; j < y.Shape[0]; ++j)
            for (int i = 0; i < x.Shape[0]; ++i)
                z.Item[k] += t.Item[k, j, i] * y.Item[j] * x.Item[i];

    var expected = new float[] { 63, 63, 126, 63, 63 };
    AssertArray.AreAlmostEqual(expected, z);
    AssertArray.AreAlmostEqual(expected, t.Dot(x).Dot(y));
    AssertArray.AreAlmostEqual(expected, txy);
}
public void TestOnehotDotM()
{
    var M = T.Matrix<float>("M");
    var X = T.Matrix<float>("X");
    var a = T.Vector<float>("a");
    var oneHot = T.OneHot(X.Shape, 1, a);
    var B = T.Dot(oneHot, M);

    var M_ = NN.Array(new float[,] { { 0, 3, 7 }, { 5, 2, 0 } });
    var X_ = NN.Zeros(4, 2);
    var a_ = NN.Array<float>(1, -1);

    var B_ = Op.Function(input: (M, X, a), output: B);
    var B_pred = B_(M_, X_, a_);

    var Y_ = X_.Copy();
    Y_[1] = a_;
    var B_exp = Y_.Dot(M_);
    AssertArray.AreEqual(B_exp, B_pred);
}
public void CanBroadcast_3_to_2_3()
{
    // http://www.onlamp.com/pub/a/python/2000/09/27/numerically.html?page=2
    //
    //   a = np.array([[1, 2, 3], [4, 5, 6]])
    //   b = np.array([[7, 8, 9]])
    //   a + b
    //
    var a = NN.Array(new[,] { { 1, 2, 3 }, { 4, 5, 6 } });
    var c = NN.Array(new[] { 7, 8, 9 });
    var rAdd = NN.Array(new[,] { { 8, 10, 12 }, { 11, 13, 15 } });
    var rMul = NN.Array(new[,] { { 7, 16, 27 }, { 28, 40, 54 } });
    a.AssertOfShape(2, 3);
    c.AssertOfShape(3);
    AssertArray.GenerateTests(a, c, NN.Ones<int>, (a1, c1) => AssertArray.AreEqual(rAdd, a1 + c1));
    AssertArray.GenerateTests(a, c, NN.Zeros<int>, (a1, c1) => AssertArray.AreEqual(rMul, a1 * c1));
}
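Here only the `(3,)` operand is stretched, being repeated along a new leading axis to match `(2, 3)`. The expected `rAdd`/`rMul` values can be reproduced with NumPy (assuming it is available):

```python
import numpy as np

a = np.array([[1, 2, 3], [4, 5, 6]])  # shape (2, 3)
b = np.array([7, 8, 9])               # shape (3,), repeated along axis 0

add = a + b
mul = a * b
assert (add == np.array([[8, 10, 12], [11, 13, 15]])).all()
assert (mul == np.array([[7, 16, 27], [28, 40, 54]])).all()
```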
public void TestCombineWithBias3D_1D()
{
    var t = NN.Ones<float>(6, 5, 4);
    t[2, _, _] *= 2;
    t[_, 1, _] *= -1;
    t[_, _, 2] *= 3;
    var x = NN.Array<float>(1, -1, -2);
    var y = NN.Array<float>(1, -1, 3, 1);

    Array<float> txy2 = null;
    txy2 = t[_, Upto(-1), Upto(-1)].Combine(x, y, result: txy2);
    var txy = t.CombineWithBias(x, y);

    // CombineWithBias should match Combine applied to x and y each extended with a trailing 1.
    var xb = NN.Ones<float>(4);
    xb[Upto(-1)] = x;
    var yb = NN.Ones<float>(5);
    yb[Upto(-1)] = y;
    var txbyb = t.Combine(xb, yb);
    AssertArray.AreAlmostEqual(txbyb, txy);
}
public static Array<float> PseudoInv(Array<float> a)
{
    // https://en.wikipedia.org/wiki/Moore–Penrose_pseudoinverse
    // http://vene.ro/blog/inverses-pseudoinverses-numerical-issues-speed-symmetry.html
    // https://software.intel.com/en-us/forums/intel-math-kernel-library/topic/296030
    // dgelss can also do the job, with the matrix as one input and the identity matrix as the other:
    // http://icl.cs.utk.edu/lapack-forum/viewtopic.php?f=2&t=160
    var m = a.Shape[0];
    var n = a.Shape[1];

    // Compute the SVD: a = U * Sigma * V^T
    var k = Math.Min(m, n);
    var s = new float[k];
    var u = NN.Zeros<float>(m, m);
    var vt = NN.Zeros<float>(n, n);
    // if (jobu != 'O' && jobv != 'O'), a is destroyed by gesvd (https://software.intel.com/en-us/node/521150)
    var copy = (float[])a.Values.Clone();
    var superb = new float[k - 1];
    Lapack.gesvd('A', 'A', m, n, copy, n, s, u.Values, m, vt.Values, n, superb);

    // Pseudo-inverse: a+ = V * Sigma+ * U^T, where Sigma+ inverts the non-zero singular values
    var invSigma = NN.Zeros<float>(n, m);
    invSigma[Range(0, k), Range(0, k)] = NN.Diag(1 / NN.Array(s));
    var pseudoInv = vt.T.Dot(invSigma).Dot(u.T);
    return pseudoInv;
}
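The same construction (A⁺ = V · Σ⁺ · Uᵀ with the full SVD) can be sketched in NumPy and checked against `np.linalg.pinv`; this is only a cross-check of the formula for a full-rank matrix, not a reimplementation of the Lapack binding above:

```python
import numpy as np

def pseudo_inv(a):
    # Full SVD, mirroring gesvd('A', 'A', ...): u is (m, m), vt is (n, n).
    u, s, vt = np.linalg.svd(a, full_matrices=True)
    m, n = a.shape
    k = min(m, n)
    # Sigma+ is (n, m) with the reciprocal singular values on its diagonal.
    inv_sigma = np.zeros((n, m))
    inv_sigma[:k, :k] = np.diag(1.0 / s)
    return vt.T @ inv_sigma @ u.T

a = np.array([[2.0, 0.0], [0.0, 3.0], [1.0, 1.0]])
assert np.allclose(pseudo_inv(a), np.linalg.pinv(a))
```

Note that, like the C# version, this inverts every singular value; a robust pseudo-inverse would zero out singular values below a tolerance instead.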
public void TestLtOnInt()
{
    AssertArray.AreEqual(
        NN.Array(1, 5, 3, 2, 0, -1) < NN.Array(0, 6, 1, 3, 1, 2),
        NN.Array(0, 1, 0, 1, 1, 1)
    );
}
public void ComplexReshapeWorksWithCopyFlag()
{
    var a = NN.Range(4 * 3).Reshape(4, 3);
    var exp = NN.Array(new int[] { 0, 3, 6, 9, 1, 4, 7, 10, 2, 5, 8, 11 });
    AssertArray.AreEqual(exp, a.T.Reshape(new int[] { -1 }, allowCopy: true));
}
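The `allowCopy` flag is needed because the transpose is a non-contiguous view, so flattening it cannot be done in place. NumPy exhibits the same behavior (it silently copies); a quick check of the expected values:

```python
import numpy as np

a = np.arange(12).reshape(4, 3)
# a.T is a non-contiguous view; reshaping it to 1-D forces a copy.
flat = a.T.reshape(-1)
assert (flat == np.array([0, 3, 6, 9, 1, 4, 7, 10, 2, 5, 8, 11])).all()
```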
public void TestLogSumExp()
{
    var X = NN.Array(new float[,] { { 1, 3 }, { 2, 5 } });
    AssertArray.AreAlmostEqual(NN.Log(NN.Sum(NN.Exp(X), axis: -1)), NN.LogSumExp(X));
}
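The test checks `LogSumExp` against its naive definition `log(sum(exp(X)))` over the last axis. In NumPy, the identity and the numerically stable max-shifted form (the usual reason a dedicated `LogSumExp` exists) can be verified like this:

```python
import numpy as np

X = np.array([[1.0, 3.0], [2.0, 5.0]])
naive = np.log(np.sum(np.exp(X), axis=-1))

# Stable form: shift by the row max before exponentiating to avoid overflow.
m = X.max(axis=-1)
stable = m + np.log(np.sum(np.exp(X - m[:, None]), axis=-1))
assert np.allclose(naive, stable)
```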
public void ArgmaxWorksOnVec()
{
    Assert.AreEqual(2, NN.Array(0, 1, 5, 1, 0).Argmax());
    AssertArray.AreEqual(
        NN.Array(2),
        NN.Array(0, 1, 5, 1, 0).Argmax(axis: 0, keepDims: true));
}
public static void Test2()
{
    var a = T.Matrix<float>("a");                   // declare variable
    var @out = a + T.Pow(a, 10);                    // build symbolic expression
    var f = T.Function(a, @out);                    // compile function
    Console.WriteLine(f(NN.Array<float>(0, 1, 2))); // prints `array([0, 2, 1026])`
}
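The expected output `array([0, 2, 1026])` follows directly from evaluating `a + a**10` elementwise; a plain NumPy analogue of the compiled function:

```python
import numpy as np

a = np.array([0.0, 1.0, 2.0])
out = a + a ** 10  # 0 + 0, 1 + 1, 2 + 1024
assert (out == np.array([0.0, 2.0, 1026.0])).all()
```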
public void TestMax()
{
    var a = NN.Range(4 * 2).Reshape(4, 2);
    a.Item[2, 1] = -1;
    a.Item[0, 1] = 10;
    AssertArray.AreEqual(NN.Array(new[] { 6, 10 }), a.Max(axis: 0));
    AssertArray.AreEqual(NN.Array(new[] { 10, 3, 4, 7 }), a.Max(axis: 1));
}
public void TestMin()
{
    var a = NN.Range(4 * 2).Reshape(4, 2);
    a.Item[2, 1] = -1;
    a.Item[0, 1] = 10;
    AssertArray.AreEqual(NN.Array(new[] { 0, -1 }), a.Min(axis: 0));
    AssertArray.AreEqual(NN.Array(new[] { 0, 2, -1, 6 }), a.Min(axis: 1));
}
public void FailMissingVariable()
{
    var x = T.Matrix<float>("x");
    var y = T.Matrix<float>("y");
    var z = x + y;
    var f = T.Function(x, z); // "y" is missing
    f(NN.Array(1f, 2f, 3f)); // should throw an exception
}
public void TestSoftmax2D()
{
    var X = NN.Array(new float[,] { { 1, 3 }, { 2, 5 } });
    AssertArray.AreAlmostEqual(
        new float[,] { { 0.11920292f, 0.88079708f }, { 0.04742587f, 0.95257413f } },
        NN.Softmax(X));
}
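The expected values can be reproduced by computing the row-wise softmax directly; a NumPy sketch (with the standard max-shift for numerical stability):

```python
import numpy as np

X = np.array([[1.0, 3.0], [2.0, 5.0]])
e = np.exp(X - X.max(axis=-1, keepdims=True))  # shift rows for stability
softmax = e / e.sum(axis=-1, keepdims=True)

expected = np.array([[0.11920292, 0.88079708],
                     [0.04742587, 0.95257413]])
assert np.allclose(softmax, expected)
```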
public void CanReshape_6_to_2_3_WithForcedCopy()
{
    var a0 = NN.Range(6);
    var b = NN.Array(new[,] { { 0, 1, 2 }, { 3, 4, 5 } });
    AssertArray.GenerateTests(a0, a => AssertArray.AreEqual(b, a.Reshape(new[] { 2, 3 }, forceCopy: true)));
}
public void CanReshape_6_to_2_3()
{
    var a0 = NN.Range(6);
    var b = NN.Array(new[,] { { 0, 1, 2 }, { 3, 4, 5 } });
    AssertArray.GenerateTests(a0, a => AssertArray.AreEqual(b, a.Reshape(2, 3)));
}
public void TestSolve()
{
    /* Solve the equations A*X = B */
    // https://software.intel.com/sites/products/documentation/doclib/mkl_sa/11/mkl_lapack_examples/dgesv_ex.c.htm
    const int N = 5;
    const int NRHS = 3;
    const int LDA = N;
    const int LDB = NRHS;
    int n = N, nrhs = NRHS, lda = LDA, ldb = LDB;

    /* Local arrays */
    int[] ipiv = new int[N];
    double[] a = new double[N * N]
    {
         6.80, -6.05, -0.45,  8.32, -9.67,
        -2.11, -3.30,  2.58,  2.71, -5.14,
         5.66,  5.36, -2.70,  4.35, -7.26,
         5.97, -4.44,  0.27, -7.17,  6.08,
         8.23,  1.08,  9.04,  2.14, -6.87
    };
    double[] b = new double[N * NRHS]
    {
         4.02, -1.56,  9.81,
         6.19,  4.00, -4.09,
        -8.22, -8.67, -4.57,
        -7.57,  1.75, -8.61,
        -3.03,  2.86,  8.99
    };

    /* Solve the equations A*X = B */
    Lapack.gesv(n, nrhs, a, lda, ipiv, b, ldb);

    // Solution
    var solution = NN.Array(new[]
    {
        -0.80, -0.39,  0.96,
        -0.70, -0.55,  0.22,
         0.59,  0.84,  1.90,
         1.32, -0.10,  5.36,
         0.57,  0.11,  4.04,
    }).Reshape(n, nrhs);
    AssertArray.AreAlmostEqual(solution, NN.Array(b).Reshape(N, NRHS), 1e-2, 1e-2);

    // Details of LU factorization
    var luFactorization = NN.Array(new[]
    {
         8.23,  1.08,  9.04,   2.14,  -6.87,
         0.83, -6.94, -7.92,   6.55,  -3.99,
         0.69, -0.67, -14.18,  7.24,  -5.19,
         0.73,  0.75,  0.02, -13.82,  14.19,
        -0.26,  0.44, -0.59,  -0.34,  -3.43,
    }).Reshape(n, n);
    AssertArray.AreAlmostEqual(luFactorization, NN.Array(a).Reshape(n, n), 1e-2, 1e-2);

    // Pivot indices
    AssertArray.AreEqual(new[] { 5, 5, 3, 4, 5 }, ipiv);
}
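The expected solution in this test comes from Intel's dgesv example; reading `a` and `b` as row-major 5×5 and 5×3 matrices, the same system can be cross-checked with NumPy's dense solver (which calls the same LAPACK routine under the hood):

```python
import numpy as np

A = np.array([
    [ 6.80, -6.05, -0.45,  8.32, -9.67],
    [-2.11, -3.30,  2.58,  2.71, -5.14],
    [ 5.66,  5.36, -2.70,  4.35, -7.26],
    [ 5.97, -4.44,  0.27, -7.17,  6.08],
    [ 8.23,  1.08,  9.04,  2.14, -6.87],
])
B = np.array([
    [ 4.02, -1.56,  9.81],
    [ 6.19,  4.00, -4.09],
    [-8.22, -8.67, -4.57],
    [-7.57,  1.75, -8.61],
    [-3.03,  2.86,  8.99],
])
X = np.linalg.solve(A, B)

# Solution as printed in the Intel example, rounded to two decimals.
expected = np.array([
    [-0.80, -0.39,  0.96],
    [-0.70, -0.55,  0.22],
    [ 0.59,  0.84,  1.90],
    [ 1.32, -0.10,  5.36],
    [ 0.57,  0.11,  4.04],
])
assert np.allclose(X, expected, atol=1e-2)
```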
public void TestVector()
{
    var v1 = NN.Array<float>(0, 1, 2);
    var v = new Array<float>(3);
    for (int i = 0; i < 3; i++)
    {
        v.Item[i] = i;
    }
    AssertArray.AreAlmostEqual(v, v1);
}
public void TestCombine2()
{
    var a = NN.Ones<float>(4, 5, 6);
    var x = NN.Ones<float>(6);
    var y = NN.Ones<float>(5);
    var z = a.Combine(x, y);
    var expected = NN.Array<float>(30, 30, 30, 30);
    AssertArray.AreAlmostEqual(expected, z);
}
public void ArgmaxWorksOnMatrix()
{
    var a = NN.Array(new[,] { { 0, 10 }, { 2, 3 }, { 4, -1 }, { 6, 7 } });
    AssertArray.AreEqual(NN.Array(new[] { 3, 0 }), a.Argmax(axis: 0));
    AssertArray.AreEqual(NN.Array(new[] { 1, 1, 0, 1 }), a.Argmax(axis: 1));
}
public void TestShuffleInPlaceDim1()
{
    var a = NN.Range(10);
    int[] perm = new int[10] { 1, 2, 3, 8, 9, 0, 4, 6, 7, 5 };
    var expected = NN.Array(new int[10] { 5, 0, 1, 2, 6, 9, 7, 8, 3, 4 });
    a.ShuffleInplace(perm);
    AssertArray.AreEqual(a, expected);
}
public void CanReshapeReversedArray()
{
    var a = NN.Zeros<int>(6);
    a[Step(-1)] = NN.Range(6);
    a = a[Step(-1)];
    var b = NN.Array(new[,] { { 0, 1, 2 }, { 3, 4, 5 } });
    AssertArray.AreEqual(NN.Range(6), a);
    AssertArray.AreEqual(b, a.Reshape(2, 3));
}
public void TestMatrixFromArray()
{
    var m1 = NN.Array(
        new[] { 0, 1, 2, 3 },
        new[] { 1, 2, 3, 4 },
        new[] { 2, 3, 4, 5 }
    );
    var m2 = NN.Array(new[,]
    {
        { 0, 1, 2, 3 },
        { 1, 2, 3, 4 },
        { 2, 3, 4, 5 }
    });
    AssertArray.AreEqual(m2, m1);
}
public void TestSimpleReshape()
{
    var a = NN.Range(4 * 3).Reshape(4, 3);
    var exp = NN.Array(new int[,]
    {
        { 0, 1, 2 },
        { 3, 4, 5 },
        { 6, 7, 8 },
        { 9, 10, 11 }
    });
    AssertArray.AreEqual(exp, a);
    AssertArray.AreEqual(exp, NN.Range(4 * 3).Reshape(4, -1));
    AssertArray.AreEqual(exp, NN.Range(4 * 3).Reshape(-1, 3));
    AssertArray.AreEqual(NN.Range(12), a.Reshape(-1));
}