// Requires: using System; using System.Runtime.InteropServices; using OpenCvSharp;
public unsafe Kalman()
{
    // cvKalmanPredict, cvKalmanCorrect
    // Track a rotating point using a Kalman filter

    // Transition matrix (A) data
    float[] A = new float[] { 1, 1, 0, 1 };

    using (IplImage img = new IplImage(500, 500, BitDepth.U8, 3))
    using (CvKalman kalman = new CvKalman(2, 1, 0))
    using (CvWindow window = new CvWindow("Kalman", WindowMode.AutoSize))
    {
        // state is (phi, delta_phi) - angle and angle increment
        CvMat state = new CvMat(2, 1, MatrixType.F32C1);
        CvMat process_noise = new CvMat(2, 1, MatrixType.F32C1);
        // only phi (angle) is measured
        CvMat measurement = new CvMat(1, 1, MatrixType.F32C1);
        measurement.SetZero();
        CvRandState rng = new CvRandState(0, 1, -1, DistributionType.Uniform);
        int code = -1;

        for (; ; )
        {
            Cv.RandSetRange(rng, 0, 0.1, 0);
            rng.DistType = DistributionType.Normal;

            // Configure the filter: transition matrix A, H = identity (only phi observed),
            // and the process / measurement noise covariances
            Marshal.Copy(A, 0, kalman.TransitionMatrix.Data, A.Length);
            kalman.MeasurementMatrix.SetIdentity(1);
            kalman.ProcessNoiseCov.SetIdentity(1e-5);
            kalman.MeasurementNoiseCov.SetIdentity(1e-1);
            kalman.ErrorCovPost.SetIdentity(1);

            // choose random initial state
            Cv.Rand(rng, kalman.StatePost);
            rng.DistType = DistributionType.Normal;

            for (; ; )
            {
                float state_angle = state.DataSingle[0];
                CvPoint state_pt = CalcPoint(img, state_angle);

                // predict point position
                CvMat prediction = kalman.Predict(null);
                float predict_angle = prediction.DataSingle[0];
                CvPoint predict_pt = CalcPoint(img, predict_angle);

                // generate measurement: H * state + measurement noise
                Cv.RandSetRange(rng, 0, Math.Sqrt(kalman.MeasurementNoiseCov.DataSingle[0]), 0);
                Cv.Rand(rng, measurement);
                Cv.MatMulAdd(kalman.MeasurementMatrix, state, measurement, measurement);
                float measurement_angle = measurement.DataArraySingle[0];
                CvPoint measurement_pt = CalcPoint(img, measurement_angle);

                // plot the true state (white), the measurement (red) and the prediction (green)
                img.SetZero();
                DrawCross(img, state_pt, CvColor.White, 3);
                DrawCross(img, measurement_pt, CvColor.Red, 3);
                DrawCross(img, predict_pt, CvColor.Green, 3);
                img.Line(state_pt, measurement_pt, new CvColor(255, 0, 0), 3, LineType.AntiAlias, 0);
                img.Line(state_pt, predict_pt, new CvColor(255, 255, 0), 3, LineType.AntiAlias, 0);

                // adjust Kalman filter state
                kalman.Correct(measurement);

                // advance the true state: A * state + process noise
                Cv.RandSetRange(rng, 0, Math.Sqrt(kalman.ProcessNoiseCov.DataSingle[0]), 0);
                Cv.Rand(rng, process_noise);
                Cv.MatMulAdd(kalman.TransitionMatrix, state, process_noise, state);

                window.ShowImage(img);

                // break current simulation by pressing a key
                code = CvWindow.WaitKey(100);
                if (code > 0)
                {
                    break;
                }
            }
            // exit by ESCAPE
            if (code == 27)
            {
                break;
            }
        }
    }
}
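Illustration only (not part of the sample above): the matrices configured in the loop encode a constant-angular-velocity model. A = [1 1; 0 1] means the predicted angle is the previous angle plus its increment while the increment carries over unchanged, and H = [1 0] means only the angle, not its increment, is observed. A minimal, hand-rolled sketch of those two relations (hypothetical helper names):

using System;

static class KalmanModelSketch
{
    // x = (phi, delta_phi); the predict step is x' = A * x
    static float[] Predict(float[] x)
    {
        return new float[] { x[0] + x[1], x[1] };
    }

    // z = H * x; the measurement is just the angle component
    static float Measure(float[] x)
    {
        return x[0];
    }

    static void Main()
    {
        float[] x = { 0.5f, 0.1f };
        float[] next = Predict(x);
        Console.WriteLine("phi = {0}, delta_phi = {1}, measured = {2}", next[0], next[1], Measure(next));
    }
}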
// Use the CamShift algorithm to track the base histogram through the
// succeeding frames
void CalculateCamShift(CvMat _image)
{
    CvMat _backProject = CalculateBackProjection(_image, _histogramToTrack);

    // Create convolution kernels for erosion and dilation
    IplConvKernel elementErode = Cv.CreateStructuringElementEx(10, 10, 5, 5, ElementShape.Rect, null);
    IplConvKernel elementDilate = Cv.CreateStructuringElementEx(4, 4, 2, 2, ElementShape.Rect, null);

    // Erode and then dilate the back projection to suppress noise
    // while keeping the blob objects.
    Cv.Erode(_backProject, _backProject, elementErode, 1);
    Cv.Dilate(_backProject, _backProject, elementDilate, 1);

    if (backprojWindowFlag)
    {
        Cv.ShowImage("Back Projection", _backProject);
    }

    // Parameters returned by the CamShift algorithm
    CvBox2D _outBox;
    CvConnectedComp _connectComp;

    // Set the termination criteria for the CamShift algorithm:
    // at most 10 iterations and at least 1 pixel change in centroid
    CvTermCriteria term_criteria = Cv.TermCriteria(CriteriaType.Iteration | CriteriaType.Epsilon, 10, 1);

    // Draw the object center based on the Kalman filter prediction
    CvMat _kalmanPrediction = _kalman.Predict();
    int predictX = Mathf.FloorToInt((float)_kalmanPrediction.GetReal2D(0, 0));
    int predictY = Mathf.FloorToInt((float)_kalmanPrediction.GetReal2D(1, 0));

    // Run the CamShift algorithm
    if (Cv.CamShift(_backProject, _rectToTrack, term_criteria, out _connectComp, out _outBox) > 0)
    {
        // Use the CamShift estimate of the object center to update the Kalman model
        CvMat _kalmanMeasurement = Cv.CreateMat(2, 1, MatrixType.F32C1);
        // Update the Kalman model with raw data from the CamShift estimate
        _kalmanMeasurement.Set2D(0, 0, _outBox.Center.X);   // Raw X position
        _kalmanMeasurement.Set2D(1, 0, _outBox.Center.Y);   // Raw Y position
        //_kalmanMeasurement.Set2D (2, 0, _outBox.Center.X - lastPosition.X);
        //_kalmanMeasurement.Set2D (3, 0, _outBox.Center.Y - lastPosition.Y);

        lastPosition.X = Mathf.FloorToInt(_outBox.Center.X);
        lastPosition.Y = Mathf.FloorToInt(_outBox.Center.Y);

        _kalman.Correct(_kalmanMeasurement); // Correct the Kalman model with the raw measurement

        // CamShift returns two values: _connectComp and _outBox.
        // _connectComp contains the newly estimated position and size
        // of the region of interest. This is passed into the subsequent
        // call to CamShift.
        // Update the ROI rectangle with CamShift's new estimate of the ROI
        _rectToTrack = CheckROIBounds(_connectComp.Rect);

        // Draw a rectangle over the tracked ROI
        // This method will draw the rectangle but won't rotate it.
        _image.DrawRect(_rectToTrack, CvColor.Aqua);
        _image.DrawMarker(predictX, predictY, CvColor.Aqua);

        // _outBox contains a rotated rectangle estimating the position, size, and orientation
        // of the object we want to track (specified by the initial region of interest).
        // We take this estimate and draw a rotated bounding box.
        rotatedBoxToTrack = _outBox;

        // Draw a rotated rectangle representing CamShift's estimate of the
        // object's position, size, and orientation.
        _image.DrawPolyLine(rectangleBoxPoint(_outBox.BoxPoints()), true, CvColor.Red);
    }
    else
    {
        // CamShift lost the object; fall back to the Kalman prediction
        //Debug.Log ("Object lost by Camshift tracker");
        _image.DrawMarker(predictX, predictY, CvColor.Purple, MarkerStyle.CircleLine);
        _rectToTrack = CheckROIBounds(new CvRect(predictX - Mathf.FloorToInt(_rectToTrack.Width / 2),
                                                 predictY - Mathf.FloorToInt(_rectToTrack.Height / 2),
                                                 _rectToTrack.Width, _rectToTrack.Height));
        _image.DrawRect(_rectToTrack, CvColor.Purple);
    }

    if (trackWindowFlag)
    {
        Cv.ShowImage("Image", _image);
    }
}
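CheckROIBounds is called in both branches above but its body is not part of this excerpt. Below is a minimal sketch of what such a helper might do, assuming its purpose is simply to clamp the search window to the frame so CamShift never receives an out-of-range or empty rectangle; frameWidth and frameHeight are placeholders for the actual video dimensions, and the real helper takes only the rectangle as an argument.

using System;
using OpenCvSharp;

// Hypothetical sketch of a CheckROIBounds-style clamp (not the project's real helper).
static class RoiClampSketch
{
    static CvRect CheckROIBounds(CvRect roi, int frameWidth, int frameHeight)
    {
        // Clamp the top-left corner inside the frame
        int x = Math.Max(0, Math.Min(roi.X, frameWidth - 1));
        int y = Math.Max(0, Math.Min(roi.Y, frameHeight - 1));
        // Keep the window at least 1 pixel wide/high and fully inside the frame
        int w = Math.Max(1, Math.Min(roi.Width, frameWidth - x));
        int h = Math.Max(1, Math.Min(roi.Height, frameHeight - y));
        return new CvRect(x, y, w, h);
    }
}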