Building on the ZLMediaKit source code, this post continues exploring how to extend it with algorithm analysis (AI) support, following up on the previous post: https://www.cnblogs.com/feixiang-energy/p/17623567.html
The algorithm model is OpenCV's bundled face detection cascade: https://github.com/opencv/opencv/blob/master/data/haarcascades/haarcascade_frontalface_default.xml
ZLMediaKit provides an interface that samples video frames at a fixed interval and runs them through the algorithm. When a face is detected in the picture, the face region is outlined and the image is saved, and the analysis result is reported back to the business application via a webhook callback.
-----------------------------------------------------
The key code is as follows:
1. cv::Mat FFmpegMuxer::avframeToCvmat. Add an AVFrame-to-cv::Mat conversion function that converts the YUV image decoded by FFmpeg into the corresponding OpenCV format.
cv::Mat FFmpegMuxer::avframeToCvmat(const AVFrame *frame) {
    int width = frame->width;
    int height = frame->height;
    cv::Mat image(height, width, CV_8UC3);
    int cvLinesizes[1];
    cvLinesizes[0] = image.step1();
    // Convert from the decoder's pixel format to BGR24, the channel order OpenCV expects
    SwsContext *conversion = sws_getContext(
        width, height, (AVPixelFormat)frame->format,
        width, height, AVPixelFormat::AV_PIX_FMT_BGR24,
        SWS_FAST_BILINEAR, NULL, NULL, NULL);
    sws_scale(conversion, frame->data, frame->linesize, 0, height, &image.data, cvLinesizes);
    sws_freeContext(conversion);
    return image;
}
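Note that this version creates and destroys an SwsContext on every call. Since all frames of a stream share the same dimensions and pixel format, the context can be reused across frames. Below is a minimal sketch of that optimization using FFmpeg's sws_getCachedContext; the _sws_ctx member is an assumption and is not part of the original code:

// Hypothetical member on FFmpegMuxer: SwsContext *_sws_ctx = nullptr;
// (release it with sws_freeContext(_sws_ctx) in the destructor)
cv::Mat FFmpegMuxer::avframeToCvmat(const AVFrame *frame) {
    int width = frame->width;
    int height = frame->height;
    cv::Mat image(height, width, CV_8UC3);
    int cvLinesizes[1] = { (int)image.step1() };
    // sws_getCachedContext returns the existing context unchanged when all
    // parameters match, so the scaler is only allocated once per stream
    _sws_ctx = sws_getCachedContext(_sws_ctx,
        width, height, (AVPixelFormat)frame->format,
        width, height, AV_PIX_FMT_BGR24,
        SWS_FAST_BILINEAR, NULL, NULL, NULL);
    sws_scale(_sws_ctx, frame->data, frame->linesize, 0, height, &image.data, cvLinesizes);
    return image;
}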
2. FFmpegMuxer::addTrack. Modify addTrack to set up a video decoder; each frame delivered by the decode callback is run through the detection algorithm, and the result is broadcast via the "NoticeCenter" mechanism.
bool FFmpegMuxer::addTrack(const Track::Ptr &track) {
    if (track->getTrackType() == TrackVideo) {
        _video_dec.reset(new FFmpegDecoder(track));
        /*
        // Set up an H264 encoder
        H264Track::Ptr newTrack(new H264Track());
        VideoTrack::Ptr video = static_pointer_cast<VideoTrack>(track);
        newTrack->setVideoWidth(video->getVideoWidth());
        newTrack->setVideoHeight(video->getVideoHeight());
        newTrack->setBitRate(video->getBitRate());
        _video_enc.reset(new FFmpegEncoder(newTrack));
        _video_enc->setOnEncode([this](const Frame::Ptr &frame) {
            if (_cb) {
                _cb(frame);
            }
        });
        */
        _video_dec->setOnDecode([this](const FFmpegFrame::Ptr &frame) {
            /* Transcoding:
            // feed the decoded frame back into the encoder for re-encoding
            _video_enc->inputFrame(frame, false);
            */
            /*
            // --- frame extraction begin
            time_t now = ::time(NULL);
            if (now - _last_time >= _gapTime) {
                AVFrame *avFrame = frame->get();
                int bufSize = av_image_get_buffer_size(AV_PIX_FMT_BGRA, avFrame->width, avFrame->height, 64);
                uint8_t *buf = (uint8_t *)av_malloc(bufSize);
                int picSize = frameToImage(avFrame, AV_CODEC_ID_MJPEG, buf, bufSize);
                if (picSize > 0) {
                    auto file_path = _folder_path + getTimeStr("%H-%M-%S_") + std::to_string(_index) + ".jpeg";
                    auto f = fopen(file_path.c_str(), "wb+");
                    if (f) {
                        // write only the encoded picture, not the whole buffer
                        fwrite(buf, sizeof(uint8_t), picSize, f);
                        fclose(f);
                    }
                }
                av_free(buf);
                _index++;
                _last_time = now;
            }
            // --- frame extraction end
            */
            // Algorithm analysis
            time_t now = ::time(NULL);
            if (now - _last_time >= _gapTime) {
                AVFrame *avFrame = frame->get();
                cv::Mat img = avframeToCvmat(avFrame);
                std::vector<cv::Rect> faces;
                if (!img.empty()) {
                    // Load the OpenCV face detection model
                    cv::CascadeClassifier faceCascade;
                    faceCascade.load("model/haarcascade_frontalface_default.xml");
                    if (faceCascade.empty()) {
                        LogW("failed to load the detection model");
                        return;
                    }
                    // Check whether the frame contains any faces
                    faceCascade.detectMultiScale(img, faces, 1.1, 10);
                }
                if (!faces.empty()) {
                    // Draw a rectangle around each detected face
                    for (size_t i = 0; i < faces.size(); i++) {
                        cv::rectangle(img, faces[i].tl(), faces[i].br(), cv::Scalar(255, 0, 255), 3);
                    }
                    // Save the annotated result image
                    auto pic_path = _folder_path + getTimeStr("%H-%M-%S_") + std::to_string(_index) + ".jpeg";
                    cv::imwrite(pic_path.c_str(), img);
                    _index++;
                    // A face was detected: broadcast the event
                    int timestamp = int(::time(NULL));
                    std::string message = "face detected";
                    NoticeCenter::Instance().emitEvent(Broadcast::kBroadcastAiEvent, timestamp, message, pic_path);
                }
                _last_time = now;
            }
        });
    }
    return true;
}
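One caveat in the callback above: the cv::CascadeClassifier is constructed and the model file reloaded from disk for every sampled frame. Loading it once and reusing it avoids the repeated I/O. A minimal sketch of that change; the _faceCascade member and the constructor placement are assumptions, not part of the original patch:

// Hypothetical member on FFmpegMuxer: cv::CascadeClassifier _faceCascade;
// Load the model once, e.g. when the muxer is constructed:
if (!_faceCascade.load("model/haarcascade_frontalface_default.xml")) {
    LogW("failed to load the detection model");
}

// ...the decode callback then only runs detection:
if (!img.empty() && !_faceCascade.empty()) {
    _faceCascade.detectMultiScale(img, faces, 1.1, 10);
}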
3. Define the broadcast event and its argument list in config.h.
// AI analysis event broadcast
extern const std::string kBroadcastAiEvent;
#define BroadcastAiEventArgs const int &timestamp, const std::string &message, const std::string &picPath
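Because kBroadcastAiEvent is only declared extern here, it also needs a definition in the corresponding source file. ZLMediaKit defines its other broadcast constants in Common/config.cpp, so a sketch following that convention would look like this (the exact string value is an assumption):

// Common/config.cpp, inside the Broadcast namespace (string value assumed)
const std::string kBroadcastAiEvent = "kBroadcast.AiEvent";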
4. Add a listener in webhook.cpp that subscribes to the Broadcast::kBroadcastAiEvent event and performs the webhook HTTP callback.
void installWebHook() {
    *****
    // AI analysis event broadcast
    NoticeCenter::Instance().addListener(&web_hook_tag, Broadcast::kBroadcastAiEvent, [](BroadcastAiEventArgs) {
        GET_CONFIG(string, hook_ai_event, Hook::kOnAiEvent);
        if (!hook_enable || hook_ai_event.empty()) {
            return;
        }
        ArgsType body;
        body["timestamp"] = timestamp;
        body["message"] = message;
        body["picPath"] = picPath;
        // Perform the HTTP hook
        do_http_hook(hook_ai_event, body, nullptr);
    });
    *****
}
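For GET_CONFIG to resolve, Hook::kOnAiEvent must also be declared and defined next to ZLMediaKit's other hook URL keys, and the target URL configured in config.ini. A sketch following the project's conventions; the key name "hook.on_ai_event" and the example URL are assumptions:

// Common/config.h, inside namespace Hook (assumed placement):
extern const std::string kOnAiEvent;

// Common/config.cpp (key name assumed, matching the other hook entries):
const std::string kOnAiEvent = "hook.on_ai_event";

The matching config.ini entry would then be something like on_ai_event=http://127.0.0.1:8080/index/hook/on_ai_event under the [hook] section, and the business server would receive a JSON body along these lines (values illustrative): { "timestamp" : 1692000000, "message" : "face detected", "picPath" : "snap/10-23-45_3.jpeg" }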