private static Matrix<float> ToMatrix<T>(T[][] data) where T : struct
{
   int elementCount = 0;
   int structureSize = Emgu.Util.Toolbox.SizeOf<T>();
   int floatValueInStructure = structureSize / Emgu.Util.Toolbox.SizeOf<float>();
   foreach (T[] d in data)
      elementCount += d.Length;

   Matrix<float> res = new Matrix<float>(elementCount, floatValueInStructure);
   Int64 address = res.MCvMat.Data.ToInt64();
   foreach (T[] d in data)
   {
      int lengthInBytes = d.Length * structureSize;
      GCHandle handle = GCHandle.Alloc(d, GCHandleType.Pinned);
      CvToolbox.Memcpy(new IntPtr(address), handle.AddrOfPinnedObject(), lengthInBytes);
      handle.Free();
      address += lengthInBytes;
   }
   return res;
}
private static void ConvertColor(IntPtr src, IntPtr dest, Type srcColor, Type destColor, Size size, Stream stream)
{
   try
   {
      // if a direct conversion exists, apply it
      GpuInvoke.CvtColor(src, dest, CvToolbox.GetColorCvtCode(srcColor, destColor), stream);
   }
   catch
   {
      try
      {
         // if a direct conversion doesn't exist, apply a two-step conversion.
         // In this case we need to wait for the stream to complete, because a temporary
         // local image buffer is used and we don't want the tmp image to be released
         // before the operation is completed.
         using (GpuImage<Bgr, TDepth> tmp = new GpuImage<Bgr, TDepth>(size))
         {
            GpuInvoke.CvtColor(src, tmp.Ptr, CvToolbox.GetColorCvtCode(srcColor, typeof(Bgr)), stream);
            GpuInvoke.CvtColor(tmp.Ptr, dest, CvToolbox.GetColorCvtCode(typeof(Bgr), destColor), stream);
            stream.WaitForCompletion();
         }
      }
      catch
      {
         throw new NotSupportedException(String.Format(
            "Conversion from Image<{0}, {1}> to Image<{2}, {3}> is not supported by OpenCV",
            srcColor.ToString(), typeof(TDepth).ToString(),
            destColor.ToString(), typeof(TDepth).ToString()));
      }
   }
}
private static void ConvertColor(IntPtr src, IntPtr dest, Type srcColor, Type destColor, Size size, Stream stream)
{
   try
   {
      // if a direct conversion exists, apply it
      GpuInvoke.CvtColor(src, dest, CvToolbox.GetColorCvtCode(srcColor, destColor), stream);
   }
   catch
   {
      try
      {
         // if a direct conversion doesn't exist, apply a two-step conversion
         using (GpuImage<Bgr, TDepth> tmp = new GpuImage<Bgr, TDepth>(size))
         {
            GpuInvoke.CvtColor(src, tmp.Ptr, CvToolbox.GetColorCvtCode(srcColor, typeof(Bgr)), stream);
            GpuInvoke.CvtColor(tmp.Ptr, dest, CvToolbox.GetColorCvtCode(typeof(Bgr), destColor), stream);
            // wait for the stream to complete: the temporary buffer must not be
            // released (by leaving the using block) before the async operation finishes
            stream.WaitForCompletion();
         }
      }
      catch
      {
         throw new NotSupportedException(String.Format(
            "Conversion from Image<{0}, {1}> to Image<{2}, {3}> is not supported by OpenCV",
            srcColor.ToString(), typeof(TDepth).ToString(),
            destColor.ToString(), typeof(TDepth).ToString()));
      }
   }
}
private static void ConvertColor(IInputArray src, IOutputArray dest, Type srcColor, Type destColor, int dcn, Size size, Stream stream)
{
   try
   {
      // if a direct conversion exists, apply it
      CudaInvoke.CvtColor(src, dest, CvToolbox.GetColorCvtCode(srcColor, destColor), dcn, stream);
   }
   catch
   {
      try
      {
         // if a direct conversion doesn't exist, apply a two-step conversion.
         // In this case we need to wait for the stream to complete, because a temporary
         // local image buffer is used and we don't want the tmp image to be released
         // before the operation is completed.
         using (CudaImage<Bgr, TDepth> tmp = new CudaImage<Bgr, TDepth>(size))
         {
            CudaInvoke.CvtColor(src, tmp, CvToolbox.GetColorCvtCode(srcColor, typeof(Bgr)), 3, stream);
            CudaInvoke.CvtColor(tmp, dest, CvToolbox.GetColorCvtCode(typeof(Bgr), destColor), dcn, stream);
            stream.WaitForCompletion();
         }
      }
      catch (Exception excpt)
      {
         throw new NotSupportedException(String.Format(
            "Conversion from CudaImage<{0}, {1}> to CudaImage<{2}, {3}> is not supported by OpenCV: {4}",
            srcColor.ToString(), typeof(TDepth).ToString(),
            destColor.ToString(), typeof(TDepth).ToString(),
            excpt.Message));
      }
   }
}
private static Matrix<float> ToMatrix<T>(T[][] data) where T : struct
{
   int elementCount = 0;
#if NETFX_CORE
   int structureSize = Marshal.SizeOf<T>();
   int floatValueInStructure = structureSize / Marshal.SizeOf<float>();
#else
   int structureSize = Marshal.SizeOf(typeof(T));
   int floatValueInStructure = structureSize / Marshal.SizeOf(typeof(float));
#endif
   foreach (T[] d in data)
      elementCount += d.Length;

   Matrix<float> res = new Matrix<float>(elementCount, floatValueInStructure);
   Int64 address = res.MCvMat.Data.ToInt64();
   foreach (T[] d in data)
   {
      int lengthInBytes = d.Length * structureSize;
      GCHandle handle = GCHandle.Alloc(d, GCHandleType.Pinned);
      CvToolbox.Memcpy(new IntPtr(address), handle.AddrOfPinnedObject(), lengthInBytes);
      handle.Free();
      address += lengthInBytes;
   }
   return res;
}
/*
/// <summary>
/// Create a LevMarqSparse solver
/// </summary>
public LevMarqSparse()
{
   _ptr = CvInvoke.CvCreateLevMarqSparse();
}*/

/// <summary>
/// Useful function to do simple bundle adjustment tasks
/// </summary>
/// <param name="points">Positions of points in the global coordinate system (input and output); values will be modified by bundle adjustment</param>
/// <param name="imagePoints">Projections of 3D points for every camera</param>
/// <param name="visibility">Visibility of 3D points for every camera</param>
/// <param name="cameraMatrix">Intrinsic matrices of all cameras (input and output); values will be modified by bundle adjustment</param>
/// <param name="R">Rotation matrices of all cameras (input and output); values will be modified by bundle adjustment</param>
/// <param name="T">Translation vectors of all cameras (input and output); values will be modified by bundle adjustment</param>
/// <param name="distCoefficients">Distortion coefficients of all cameras (input and output); values will be modified by bundle adjustment</param>
/// <param name="termCrit">Termination criteria; a reasonable value is (30, 1.0e-12)</param>
public static void BundleAdjust(
   MCvPoint3D64f[] points, MCvPoint2D64f[][] imagePoints, int[][] visibility,
   Matrix<double>[] cameraMatrix, Matrix<double>[] R, Matrix<double>[] T,
   Matrix<double>[] distCoefficients, MCvTermCriteria termCrit)
{
   using (Matrix<double> imagePointsMat = CvToolbox.GetMatrixFromPoints(imagePoints))
   using (Matrix<int> visibilityMat = CvToolbox.GetMatrixFromArrays(visibility))
   using (VectorOfMat cameraMatVec = new VectorOfMat())
   using (VectorOfMat rMatVec = new VectorOfMat())
   using (VectorOfMat tMatVec = new VectorOfMat())
   using (VectorOfMat distorMatVec = new VectorOfMat())
   {
      cameraMatVec.Push(cameraMatrix);
      rMatVec.Push(R);
      tMatVec.Push(T);
      distorMatVec.Push(distCoefficients);

      GCHandle handlePoints = GCHandle.Alloc(points, GCHandleType.Pinned);
      CvInvoke.CvLevMarqSparseAdjustBundle(
         cameraMatrix.Length, points.Length, handlePoints.AddrOfPinnedObject(),
         imagePointsMat, visibilityMat, cameraMatVec, rMatVec, tMatVec, distorMatVec,
         ref termCrit);
      handlePoints.Free();
   }
}
/// <summary>
/// Create k-d feature trees using the image features extracted from the model image.
/// </summary>
/// <param name="modelFeatures">The image features extracted from the model image</param>
public ImageFeatureMatcher(ImageFeature[] modelFeatures)
{
   Debug.Assert(modelFeatures.Length > 0, "Model Features should have size > 0");
   _modelIndex = new Flann.Index(
      CvToolbox.GetMatrixFromDescriptors(
         Array.ConvertAll<ImageFeature, float[]>(
            modelFeatures,
            delegate(ImageFeature f) { return f.Descriptor; })),
      1);
   _modelFeatures = modelFeatures;
}
/// <summary>
/// Create a GpuMat of the specified size
/// </summary>
/// <param name="rows">The number of rows (height)</param>
/// <param name="cols">The number of columns (width)</param>
/// <param name="channels">The number of channels</param>
/// <param name="continuous">Indicates if the data should be continuous</param>
public GpuMat(int rows, int cols, int channels, bool continuous)
{
   int matType = CvInvoke.CV_MAKETYPE((int)CvToolbox.GetMatrixDepth(typeof(TDepth)), channels);
   _ptr = continuous
      ? GpuInvoke.GpuMatCreateContinuous(rows, cols, matType)
      : GpuInvoke.GpuMatCreate(rows, cols, matType);
}
/// <summary>
/// Return an enumerator of the elements in the sequence
/// </summary>
/// <returns>An enumerator of the elements in the sequence</returns>
public IEnumerator<T> GetEnumerator()
{
   using (PinnedArray<T> buffer = new PinnedArray<T>(1))
   {
      IntPtr address = buffer.AddrOfPinnedObject();
      for (int i = 0; i < Total; i++)
      {
         CvToolbox.Memcpy(address, CvInvoke.cvGetSeqElem(_ptr, i), _sizeOfElement);
         yield return buffer.Array[0];
         //yield return (T)Marshal.PtrToStructure(CvInvoke.cvGetSeqElem(_ptr, i), typeof(T));
         //yield return this[i];
      }
   }
}
public static Lab[] ToLabPalette<TColor>(TColor[] palette) where TColor : struct, IColor
{
   Lab[] labPalette = null;
   try
   {
      // Try direct conversion; GetColorCvtCode throws if no conversion code exists
      CvToolbox.GetColorCvtCode(typeof(TColor), typeof(Lab));
      labPalette = ColorConversion.ConvertColors<TColor, Lab>(palette);
   }
   catch
   {
      // Indirect conversion (converting first to Rgb)
      Rgb[] tempPalette = ColorConversion.ConvertColors<TColor, Rgb>(palette);
      labPalette = ColorConversion.ConvertColors<Rgb, Lab>(tempPalette);
   }
   return labPalette;
}
/*
private static int CompareSimilarFeature(SimilarFeature f1, SimilarFeature f2)
{
   if (f1.Distance < f2.Distance)
      return -1;
   if (f1.Distance == f2.Distance)
      return 0;
   else
      return 1;
}*/

/// <summary>
/// Match the image features from the observed image to the features from the model image
/// </summary>
/// <param name="observedFeatures">The image features from the observed image</param>
/// <param name="k">The number of neighbors to find</param>
/// <param name="emax">For k-d tree only: the maximum number of leaves to visit.</param>
/// <returns>The matched features</returns>
public MatchedImageFeature[] MatchFeature(ImageFeature[] observedFeatures, int k, int emax)
{
   if (observedFeatures.Length == 0)
      return new MatchedImageFeature[0];

   float[][] descriptors = new float[observedFeatures.Length][];
   for (int i = 0; i < observedFeatures.Length; i++)
      descriptors[i] = observedFeatures[i].Descriptor;

   using (Matrix<int> result1 = new Matrix<int>(descriptors.Length, k))
   using (Matrix<float> dist1 = new Matrix<float>(descriptors.Length, k))
   {
      _modelIndex.KnnSearch(CvToolbox.GetMatrixFromDescriptors(descriptors), result1, dist1, k, emax);

      int[,] indexes = result1.Data;
      float[,] distances = dist1.Data;
      MatchedImageFeature[] res = new MatchedImageFeature[observedFeatures.Length];
      List<SimilarFeature> matchedFeatures = new List<SimilarFeature>();

      for (int i = 0; i < res.Length; i++)
      {
         matchedFeatures.Clear();
         for (int j = 0; j < k; j++)
         {
            int index = indexes[i, j];
            if (index >= 0)
               matchedFeatures.Add(new SimilarFeature(distances[i, j], _modelFeatures[index]));
         }
         res[i].ObservedFeature = observedFeatures[i];
         res[i].SimilarFeatures = matchedFeatures.ToArray();
      }
      return res;
   }
}
public void TestCudaImageAsyncOps()
{
   if (CudaInvoke.HasCuda)
   {
      int counter = 0;
      Stopwatch watch = Stopwatch.StartNew();
      using (GpuMat img1 = new GpuMat(3000, 2000, DepthType.Cv8U, 3))
      using (GpuMat img2 = new GpuMat(3000, 2000, DepthType.Cv8U, 3))
      using (GpuMat img3 = new GpuMat())
      using (Stream stream = new Stream())
      using (GpuMat mat1 = new GpuMat())
      {
         img1.ConvertTo(mat1, DepthType.Cv8U, 1, 0, stream);
         while (!stream.Completed)
         {
            if (counter <= int.MaxValue)
               counter++;
         }
         Trace.WriteLine(String.Format("Counter has been incremented {0} times", counter));

         counter = 0;
         CudaInvoke.CvtColor(img2, img3, CvToolbox.GetColorCvtCode(typeof(Bgr), typeof(Gray)), 1, stream);
         while (!stream.Completed)
         {
            if (counter <= int.MaxValue)
               counter++;
         }
         Trace.WriteLine(String.Format("Counter has been incremented {0} times", counter));
      }
      watch.Stop();
      Trace.WriteLine(String.Format("Total time: {0} milliseconds", watch.ElapsedMilliseconds));
   }
}
/// <summary>
/// Create a Matrix (only the header is allocated) using the pinned/unmanaged <paramref name="data"/>. The <paramref name="data"/> is not freed by the Dispose method of this class.
/// </summary>
/// <param name="rows">The number of rows</param>
/// <param name="cols">The number of cols</param>
/// <param name="channels">The number of channels</param>
/// <param name="data">The pinned/unmanaged data; the data must not be released before the Matrix is disposed</param>
/// <param name="step">The step (row stride in bytes)</param>
/// <remarks>The caller is responsible for allocating and freeing the block of memory specified by the data parameter; however, the memory should not be released until the related Matrix is released.</remarks>
public Matrix(int rows, int cols, int channels, IntPtr data, int step)
{
   AllocateHeader();
   CvInvoke.cvInitMatHeader(_ptr, rows, cols, CvInvoke.CV_MAKETYPE((int)CvToolbox.GetMatrixDepth(typeof(TDepth)), channels), data, step);
}
/// <summary>
/// Create a GpuMat of the specified size
/// </summary>
/// <param name="rows">The number of rows (height)</param>
/// <param name="cols">The number of columns (width)</param>
/// <param name="channels">The number of channels</param>
public GpuMat(int rows, int cols, int channels)
{
   _ptr = GpuInvoke.GpuMatCreate(rows, cols, CvInvoke.CV_MAKETYPE((int)CvToolbox.GetMatrixDepth(typeof(TDepth)), channels));
}
/// <summary>
/// Finds (with high probability) the k nearest neighbors in the tree for each of the given (row-)vectors in desc, using best-bin-first searching ([Beis97]). The complexity of the entire operation is at most O(m*emax*log2(n)), where n is the number of vectors in the tree.
/// </summary>
/// <param name="descriptors">The m feature descriptors to be searched from the feature tree</param>
/// <param name="results">
/// The results of the best <paramref name="k"/> matches from the feature tree. An m x <paramref name="k"/> matrix. Contains -1 in some columns if fewer than k neighbors were found.
/// For each row the k nearest neighbors are not sorted. To find out the closest neighbour, look at the output matrix <paramref name="dist"/>.
/// </param>
/// <param name="dist">
/// An m x <paramref name="k"/> matrix of the distances to the k nearest neighbors
/// </param>
/// <param name="k">The number of neighbors to find</param>
/// <param name="emax">For k-d tree only: the maximum number of leaves to visit. Use 20 if not sure</param>
private void FindFeatures(float[][] descriptors, Matrix<Int32> results, Matrix<double> dist, int k, int emax)
{
   using (Matrix<float> descriptorMatrix = CvToolbox.GetMatrixFromDescriptors(descriptors))
      CvInvoke.cvFindFeatures(Ptr, descriptorMatrix.Ptr, results.Ptr, dist.Ptr, k, emax);
}
/// <summary>
/// Create a sparse matrix of the specific dimension
/// </summary>
/// <param name="dimension">The dimension of the sparse matrix</param>
public SparseMatrix(int[] dimension)
{
   _dimension = new int[dimension.Length];
   Array.Copy(dimension, _dimension, dimension.Length);
   GCHandle handle = GCHandle.Alloc(_dimension, GCHandleType.Pinned);
   _ptr = CvInvoke.cvCreateSparseMat(_dimension.Length, handle.AddrOfPinnedObject(), CvToolbox.GetMatrixDepth(typeof(TDepth)));
   handle.Free();
}
/// <summary>
/// Create a k-d tree from the specific feature descriptors
/// </summary>
/// <param name="descriptors">The array of feature descriptors</param>
public FeatureTree(float[][] descriptors)
{
   _descriptorMatrix = CvToolbox.GetMatrixFromDescriptors(descriptors);
   _ptr = CvInvoke.cvCreateKDTree(_descriptorMatrix.Ptr);
}
/// <summary>
/// Create a spill tree from the specific feature descriptors
/// </summary>
/// <param name="descriptors">The array of feature descriptors</param>
/// <param name="naive">A good value is 50</param>
/// <param name="rho">A good value is .7</param>
/// <param name="tau">A good value is .1</param>
public FeatureTree(float[][] descriptors, int naive, double rho, double tau)
{
   _descriptorMatrix = CvToolbox.GetMatrixFromDescriptors(descriptors);
   _ptr = CvInvoke.cvCreateSpillTree(_descriptorMatrix.Ptr, naive, rho, tau);
}