private void imageViewer1_RoiChanged(object sender, NationalInstruments.Vision.WindowsForms.ContoursChangedEventArgs e)
{
    if (imageViewer1.Roi.Count > 0 && imageViewer1.Roi[0].Type == ContourType.Line)
    {
        // Fill in the edge options structure from the controls on the form.
        EdgeOptions options = new EdgeOptions();
        options.ColumnProcessingMode = (ColumnProcessingMode)Enum.Parse(typeof(ColumnProcessingMode), (string)smoothing.SelectedItem);
        options.InterpolationType = (InterpolationMethod)Enum.Parse(typeof(InterpolationMethod), (string)interpolationMethod.SelectedItem);
        options.KernelSize = (uint)kernelSize.Value;
        options.MinimumThreshold = (uint)minimumThreshold.Value;
        options.Polarity = (EdgePolaritySearchMode)Enum.Parse(typeof(EdgePolaritySearchMode), (string)polarity.SelectedItem);
        options.Width = (uint)width.Value;

        // Run the edge detection.
        EdgeReport report = Algorithms.EdgeTool(imageViewer1.Image,
                                                imageViewer1.Roi,
                                                (EdgeProcess)Enum.Parse(typeof(EdgeProcess), (string)process.SelectedItem),
                                                options);

        // Overlay the edge positions.
        imageViewer1.Image.Overlays.Default.Clear();
        foreach (EdgeInfo edge in report.Edges)
        {
            imageViewer1.Image.Overlays.Default.AddOval(new OvalContour(edge.Position.X - 2, edge.Position.Y - 2, 5, 5),
                                                        Rgb32Value.RedColor,
                                                        DrawingMode.PaintValue);
        }

        // Graph the line profile and the gradient of the line.
        LineProfileReport lineProfile = Algorithms.LineProfile(imageViewer1.Image, imageViewer1.Roi);
        double[] profileData = new double[lineProfile.ProfileData.Count];
        lineProfile.ProfileData.CopyTo(profileData, 0);
        double[] gradientData = new double[report.GradientInfo.Count];
        report.GradientInfo.CopyTo(gradientData, 0);
        edgeDetectionGraph1.PlotData(profileData, gradientData);
    }
}
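The Enum.Parse calls above assume that each combo box contains the names of the corresponding enumeration's members. The following sketch shows one way the controls could be populated; the Form1_Load handler name is an assumption (the original setup code is not shown), while the control and enum names come from the handler above.

private void Form1_Load(object sender, EventArgs e)
{
    // Fill each combo box with the names of the enum members it maps to,
    // so the Enum.Parse calls in the RoiChanged handler always succeed.
    smoothing.Items.AddRange(Enum.GetNames(typeof(ColumnProcessingMode)));
    interpolationMethod.Items.AddRange(Enum.GetNames(typeof(InterpolationMethod)));
    polarity.Items.AddRange(Enum.GetNames(typeof(EdgePolaritySearchMode)));
    process.Items.AddRange(Enum.GetNames(typeof(EdgeProcess)));

    // Select defaults so SelectedItem is never null when the handler runs.
    smoothing.SelectedIndex = 0;
    interpolationMethod.SelectedIndex = 0;
    polarity.SelectedIndex = 0;
    process.SelectedIndex = 0;
}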
private void imageViewer1_RoiChanged(object sender, NationalInstruments.Vision.WindowsForms.ContoursChangedEventArgs e)
{
    // Enable the Test Parts button if at least one region is selected.
    testPartsButton.Enabled = imageViewer1.Roi.Count > 0;

    // Update the target display.
    for (int i = 0; i < imageViewer1.Roi.Count; ++i)
    {
        // Enable the indicators.
        GetExpectedTextBox(i).Enabled = true;
        GetActualTextBox(i).Enabled = true;
        GetPassFailLed(i).Enabled = true;
        GetPassFailLed(i).LedColorMode = PassFailLed.ColorMode.RedGreen;

        // Find the number of edges along the line. This is the expected
        // number of edges.
        EdgeReport report = Algorithms.EdgeTool(imageViewer1.Image,
                                                new Roi(new Shape[] { imageViewer1.Roi[i].Shape }),
                                                EdgeProcess.All,
                                                new EdgeOptions(EdgePolaritySearchMode.All, 60));
        GetExpectedTextBox(i).Text = report.Edges.Count.ToString();
    }

    for (int i = imageViewer1.Roi.Count; i < 5; ++i)
    {
        // Disable the indicators.
        GetExpectedTextBox(i).Text = "0";
        GetExpectedTextBox(i).Enabled = false;
        GetActualTextBox(i).Text = "0";
        GetActualTextBox(i).Enabled = false;
        GetPassFailLed(i).Enabled = false;
        GetPassFailLed(i).LedColorMode = PassFailLed.ColorMode.RedGray;
    }
}
private void timer1_Tick(object sender, EventArgs e)
{
    // Get the next image.
    VisionImage image = GetNextImage();

    // Find the actual number of edges along each line in the image.
    bool allTargetsPassed = true;
    for (int i = 0; i < testLines.Count; ++i)
    {
        EdgeReport report = Algorithms.EdgeTool(image,
                                                new Roi(new Shape[] { testLines[i] }),
                                                EdgeProcess.All,
                                                new EdgeOptions(EdgePolaritySearchMode.All, 60));

        // Display the results.
        GetActualTextBox(i).Text = report.Edges.Count.ToString();
        foreach (EdgeInfo ei in report.Edges)
        {
            PointContour pt = ei.Position;
            image.Overlays.Default.AddOval(new OvalContour(pt.X - 2, pt.Y - 2, 5, 5),
                                           Rgb32Value.YellowColor,
                                           DrawingMode.PaintValue);
        }

        if (Int32.Parse(GetExpectedTextBox(i).Text) == report.Edges.Count)
        {
            // Part passes.
            GetPassFailLed(i).Value = true;
            image.Overlays.Default.AddLine(testLines[i], Rgb32Value.GreenColor);
        }
        else
        {
            // Part fails.
            GetPassFailLed(i).Value = false;
            image.Overlays.Default.AddLine(testLines[i], Rgb32Value.RedColor);
        }
        allTargetsPassed = allTargetsPassed && GetPassFailLed(i).Value;
    }

    globalPassFailLed.Value = allTargetsPassed;
    imageViewer1.Attach(image);
}
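Both handlers above rely on GetExpectedTextBox, GetActualTextBox, and GetPassFailLed to map a zero-based target index to the matching indicator controls. Those helpers are not shown in the snippet; the sketch below is one possible implementation, assuming five sets of controls named expectedTextBox1..5, actualTextBox1..5, and passFailLed1..5 (these names are hypothetical).

// Look up the indicator controls for a given target index (0-based).
private TextBox GetExpectedTextBox(int index)
{
    return (TextBox)Controls["expectedTextBox" + (index + 1)];
}

private TextBox GetActualTextBox(int index)
{
    return (TextBox)Controls["actualTextBox" + (index + 1)];
}

private PassFailLed GetPassFailLed(int index)
{
    return (PassFailLed)Controls["passFailLed" + (index + 1)];
}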
private void measureDistancesButton_Click(object sender, EventArgs e)
{
    // In a point-based measurement, it is not necessary to correct the image. The
    // coordinates of the fiducial points can be found and transformed to real-world
    // coordinates. All point measurements can then be performed on these coordinates.

    // Load the Distortion Target image.
    testImage.ReadFile(System.IO.Path.Combine(ExampleImagesFolder.GetExampleImagesFolder(), "Distortion target.tif"));

    // Copy the calibration information from the calibrated image to the test image.
    Algorithms.CopyCalibrationInformation(calibrationTemplate, testImage);

    // Initialize the two search lines.
    Collection<LineContour> searchLines = new Collection<LineContour>();
    searchLines.Add(new LineContour(new PointContour(150, 250), new PointContour(490, 248.57)));
    searchLines.Add(new LineContour(new PointContour(318, 165), new PointContour(319.62, 340)));

    foreach (Direction d in new Direction[] { Direction.Horizontal, Direction.Vertical })
    {
        // Detect the edges along a line and convert the coordinates of the edge points
        // to real-world coordinates.
        EdgeReport report = Algorithms.EdgeTool(testImage,
                                                new Roi(new Shape[] { searchLines[d == Direction.Horizontal ? 0 : 1] }),
                                                EdgeProcess.All);
        Collection<PointContour> pixelPoints = new Collection<PointContour>();
        Collection<PointContour> realWorldPoints = new Collection<PointContour>();
        foreach (EdgeInfo info in report.Edges)
        {
            pixelPoints.Add(info.Position);
            realWorldPoints.Add(info.CalibratedPosition);
        }

        // Alternatively, convert all pixel coordinates to real-world coordinates in one call.
        Collection<PointContour> realWorldPoints2 = Algorithms.ConvertPixelToRealWorldCoordinates(testImage, pixelPoints).Points;

        MeasureDistortionTarget(testImage, pixelPoints, realWorldPoints, d);
    }

    imageViewer1.Attach(testImage);
}
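Because the edge points are already expressed in real-world coordinates, point-based measurements reduce to ordinary geometry on those points. The helper below is a hypothetical illustration of such a measurement (the example's MeasureDistortionTarget is not shown): it computes the distance between consecutive real-world edge points along a search line, which for a distortion grid can be compared against the known dot spacing.

// Compute the distances between consecutive points in a collection.
private static double[] DistancesBetweenConsecutivePoints(Collection<PointContour> points)
{
    double[] distances = new double[Math.Max(points.Count - 1, 0)];
    for (int i = 1; i < points.Count; ++i)
    {
        double dx = points[i].X - points[i - 1].X;
        double dy = points[i].Y - points[i - 1].Y;
        distances[i - 1] = Math.Sqrt(dx * dx + dy * dy);
    }
    return distances;
}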