Feature h264 videostorage (#1882)

* Moved writing of configure options from Controller to Model.  Fixes #191.

* Initial commit for saving events as videos :)

* Add zm_video.cpp to autotools

* Add zm_video.h to autotools

* Search for MP4V2 header file 3 times: mp4v2/mp4v2.h, mp4v2.h, mp4.h

* Fix severe memory leak

* Few minor code improvements

* Added the ability to override preset, tune, profile, and a few other improvements

* Correctly write SPS & PPS from x264 encoder headers

* Remove unnecessary SPS & PPS writing code

* Imported missing files from master to feature-h264-videostorage

* Audio support, including fixes for dts/pts, split on keyframe, and an update to the mkv extension to prevent ffmpeg problems writing RTSP audio to an mp4 container (header problem)

* Updates to make gcc happy

* Add html5 video control to timeline and event to support mkv playback

* Add zm_videostore.cpp to CMakeLists.txt

* Remove Modern Branch for now

* Fix minor bug

* Option handling added in master; removing duplicate declaration

* Add CaptureAndRecord from zm_camera.h

* Putting placeholder in for CaptureAndRecord function

* Removed duplicate code and brackets

* add digest auth file for cmake

Conflicts:
	src/CMakeLists.txt

* Add web dir back into Makefile.am
Revert "Removed web from SUBDIRS in Makefile.am"

This reverts commit d9bbcdf3a9.

* Add CaptureAndRecord to vlc, still need to make it record

* Resolve SegFault on videostore

* Swap to mp4 container

* mp4 changes

* spaces to tabs, hide video stuff if video writer is turned off

* Make timeline open event.mp4 instead of mkv

* Missed mkv in timeline.js

* Fix some issues from the merge conflict

* Resolve post merge build issues with braces

* Fix whitespace

* Update Jpeg and Video options for passthrough options

* Whitespace fix zm_camera.h

* Fix array missing comma

* Add support for Jpeg save options for h264 branch snapshot. Might remove altogether if snapshots not needed

* Update VideoStoreData memory size comment

* Change from config.use_mkv_storage to per monitor option VideoWriter from video branch

* Fix bracket issues post merge

* Clean up comments and add av_free_packet

* Convert from event_directory to event file as per Video branch

* Testing videojs for video playback

* Fixed a missing bracket post merge and also SQL_values now used for EventID and Monitors

* bring recent improvements in ffmpeg capture function into captureandrecord

* Remove pict from writeAudioFramePacket as not used

* Add translate options for h264 Storage options in Monitor and update en_gb file

* Cherry-pick from iconnor - make it compile on Ubuntu 15.04, which is libav 56.1.0

Conflicts:
	src/zm_ffmpeg.cpp
	src/zm_remote_camera_rtsp.cpp

Conflicts:
	distros/ubuntu1204/changelog

* Clean up videostore code and remove lots of unused code

* proof of concept for dynamic/automatic video rotation using video-js plugin zoomrotate

Conflicts:
	web/skins/classic/views/event.php

* removed redundant field in sql query

Conflicts:
	web/skins/classic/views/event.php

* local storage of video js plugin

* Beautify!

Make the code somewhat readable.

* added missing videojs.zoomrotate.js file

added missing videojs.zoomrotate.js file

* Typo

added missing "

* Added missing brackets

* fix to display thumbnails when only storing snapshot.jpg

* added control for video playback rate

Conflicts:
	web/skins/classic/views/event.php

* dynamically create jpegs from video file for viewing in browser

* fix timeline view for SaveJPEGs monitors (without enabled VideoWriter)

* only expose monitor info that is actually used in the client

* fix segmentation fault in zma with ubuntu 14.04 and ffmpeg 2.5.8 (gcc 4.8)

when libx264 is not installed

* better way of detecting showing image or video in timeline and event view

instead of Monitor.VideoWriter, Event.DefaultVideo is used, so even if
VideoWriter/SaveJPEG option is changed, a valid image or video will always be
displayed for historical events in both timeline and event view

this also fixes loading videos in timeline view

* Fixes problem of zmc crashing when a bad packet arrives, causing av_interleaved_write_frame() to return non-zero (-22).  Prefilters common packet issues.  Add metadata title to generated video file

* Remove syslog.h

* fixed SaveJPEGs not working

which was caused by errors introduced when merging with master

* Update README.md

* Fix build warnings specific to h264 branch, unused FrameImg, unused ret and int64_t snprintf issues

* Fix PRId64 issue in travis, builds locally fine, but I can see a gcc version issue here

* Fix PRId64 issue in travis, another try

* Try "STDC_FORMAT_MACROS" to see if that helps Travis on gcc 4.6.3

* Revert space removal around PRId64

* video branch ffmpeg 2.9 fixes

ffmpeg 2.9 patched removed SSE2 CPU

* Add FFMPEGInit back

* use WebVTT to overlay timestamps (honoring Monitor.LabelFormat) on videos in timeline and event

also fixed bug which prevented seeking in timeline video preview

* ffmpeg 3.0 API build failure fixes

* Update README.md

* merge all the commits from the messed up iconnor_video branch

* fix whitespace

* revert

* whitespace fixes

* spelling fix

* put back some text

* add these back

* fix spelling mistake

* Steal some packet dumping routines from ffmpeg. Convert them to use our logging routines

* add a test and error message if the codec is not h264

* these have been removed in master

* add a view to check auth and just send the video

* add some comments, and dump filename and AVFormatContext on failure to write header

* add the toggle for RecordAudio so that the checkbox works to turn off Audio

* Must init videoStore in constructor

* more debug and comments, return checking

* Fix dropped part of sql query.

* fix extra else and some whitespace

* Fix missing } from merge that was preventing building.

* fix tabs

* get rid of use of separator, just use \n

* Restore lost fixes for deprecation

* Why are these failing

* Respect record_audio flag when setting up video file so we don't try to initialise mp4 with unsupported audio

* Forgot that I was trying to solve the case where stream is true and record_audio is false.

* Pass swscale_ctx back in to getCachedContext or it will create new
context every frame and leak memory like a mofo.

* Add libx264-dev and libmp4v2-dev to build requires to save hassle of
ensuring they are installed before build.

* Merge my Rotation/Orientation work and fixes for bad h264 streams

* need arpa/inet for reverse lookups

* pull in the new byte range code for viewing videos

* Move our recording flag deeper into closeevent

* add braces and only call closeEvent if there is an event

* deprecate the z_frame_rate stuff which is deprecated in ffmpeg

* remark out some debugging

* fix for video on stream 1

* fix audio_stream to audio_st

* Ignore bad decodes

* fix problems with content-length causing viewing to not work in chrome/android

* change logic of sending file contents to handle an off by one and be more readable
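The off-by-one mentioned in this commit is the classic HTTP byte-range pitfall: `Range` offsets are inclusive at both ends, so a reply's Content-Length is `end - start + 1`. A minimal sketch of that arithmetic (the helper name is illustrative, not ZoneMinder's actual code):

```cpp
#include <cassert>
#include <cstdint>

// Content-Length for an HTTP "Range: bytes=start-end" request.
// Both offsets are inclusive, hence the +1; forgetting it truncates
// the last byte and breaks playback in strict clients.
static int64_t range_content_length(int64_t start, int64_t end) {
  return end - start + 1;
}
```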

* Some fixes pointed out by Maxim Romanov.  Also simplify the loading of events to not join the Monitors table

* fix to sql for timeline

* added RecordAudio to sql in README

* Use sub queries instead of joins to fix errors when using new mysql defaults.

* fix sql queries

* Dockerfile to build feature-h264-videostorage

* Must cast codec

* add php-acpu as a dependency

* require php5-acpu

* fix typo

* remove extra /

* Add a line for out-of-tree builds to do api/lib/Cake/bootstrap.php

* delete merge conflict files

* delete merge conflict files
This commit is contained in:
Isaac Connor 2017-05-15 22:02:48 -04:00 committed by GitHub
parent 33092e4022
commit c859f7291c
61 changed files with 4045 additions and 1031 deletions


@@ -428,6 +428,59 @@ else(MYSQLCLIENT_LIBRARIES)
    "ZoneMinder requires mysqlclient but it was not found on your system")
endif(MYSQLCLIENT_LIBRARIES)
# x264 (using find_library and find_path)
find_library(X264_LIBRARIES x264)
if(X264_LIBRARIES)
set(HAVE_LIBX264 1)
list(APPEND ZM_BIN_LIBS "${X264_LIBRARIES}")
find_path(X264_INCLUDE_DIR x264.h)
if(X264_INCLUDE_DIR)
include_directories("${X264_INCLUDE_DIR}")
set(CMAKE_REQUIRED_INCLUDES "${X264_INCLUDE_DIR}")
endif(X264_INCLUDE_DIR)
mark_as_advanced(FORCE X264_LIBRARIES X264_INCLUDE_DIR)
check_include_files("stdint.h;x264.h" HAVE_X264_H)
set(optlibsfound "${optlibsfound} x264")
else(X264_LIBRARIES)
set(optlibsnotfound "${optlibsnotfound} x264")
endif(X264_LIBRARIES)
# mp4v2 (using find_library and find_path)
find_library(MP4V2_LIBRARIES mp4v2)
if(MP4V2_LIBRARIES)
set(HAVE_LIBMP4V2 1)
list(APPEND ZM_BIN_LIBS "${MP4V2_LIBRARIES}")
# mp4v2/mp4v2.h
find_path(MP4V2_INCLUDE_DIR mp4v2/mp4v2.h)
if(MP4V2_INCLUDE_DIR)
include_directories("${MP4V2_INCLUDE_DIR}")
set(CMAKE_REQUIRED_INCLUDES "${MP4V2_INCLUDE_DIR}")
endif(MP4V2_INCLUDE_DIR)
check_include_file("mp4v2/mp4v2.h" HAVE_MP4V2_MP4V2_H)
# mp4v2.h
find_path(MP4V2_INCLUDE_DIR mp4v2.h)
if(MP4V2_INCLUDE_DIR)
include_directories("${MP4V2_INCLUDE_DIR}")
set(CMAKE_REQUIRED_INCLUDES "${MP4V2_INCLUDE_DIR}")
endif(MP4V2_INCLUDE_DIR)
check_include_file("mp4v2.h" HAVE_MP4V2_H)
# mp4.h
find_path(MP4V2_INCLUDE_DIR mp4.h)
if(MP4V2_INCLUDE_DIR)
include_directories("${MP4V2_INCLUDE_DIR}")
set(CMAKE_REQUIRED_INCLUDES "${MP4V2_INCLUDE_DIR}")
endif(MP4V2_INCLUDE_DIR)
check_include_file("mp4.h" HAVE_MP4_H)
mark_as_advanced(FORCE MP4V2_LIBRARIES MP4V2_INCLUDE_DIR)
set(optlibsfound "${optlibsfound} mp4v2")
else(MP4V2_LIBRARIES)
set(optlibsnotfound "${optlibsnotfound} mp4v2")
endif(MP4V2_LIBRARIES)
set(PATH_FFMPEG "")
set(OPT_FFMPEG "no")
# Do not check for ffmpeg if ZM_NO_FFMPEG is on


@@ -1,7 +1,20 @@
-ZoneMinder
+ZoneMinder H264 Patch
==========
-[![Build Status](https://travis-ci.org/ZoneMinder/ZoneMinder.png)](https://travis-ci.org/ZoneMinder/ZoneMinder) [![Bountysource](https://api.bountysource.com/badge/team?team_id=204&style=bounties_received)](https://www.bountysource.com/teams/zoneminder/issues?utm_source=ZoneMinder&utm_medium=shield&utm_campaign=bounties_received)
+[![Build Status](https://travis-ci.org/ZoneMinder/ZoneMinder.png?branch=feature-h264-videostorage)](https://travis-ci.org/ZoneMinder/ZoneMinder) [![Bountysource](https://api.bountysource.com/badge/team?team_id=204&style=bounties_received)](https://www.bountysource.com/teams/zoneminder/issues?utm_source=ZoneMinder&utm_medium=shield&utm_campaign=bounties_received)

## Feature-h264-videostorage Branch Details
This branch supports direct recording of h264 cameras into MP4 format using the h264 Passthrough option, but currently only with FFMPEG monitors. It also provides h264 encoding for any other monitor type. If you encounter any issues, please open an issue on GitHub and attach it to the h264 milestone. But do remember this is bleeding edge, so it will have problems.
Thanks to @chriswiggins and @mastertheknife for their work; @SteveGilvarry is now maintaining this branch and welcomes any assistance.
**The following SQL changes are required; these will be merged into zmupdate once we are ready to merge this branch to master.**
```
ALTER TABLE `Monitors` ADD `SaveJPEGs` TINYINT NOT NULL DEFAULT '3' AFTER `Deinterlacing` ,
ADD `VideoWriter` TINYINT NOT NULL DEFAULT '0' AFTER `SaveJPEGs` ,
ADD `EncoderParameters` TEXT NOT NULL AFTER `VideoWriter` ,
ADD `RecordAudio` TINYINT NOT NULL DEFAULT '0' AFTER `EncoderParameters` ;

ALTER TABLE `Events` ADD `DefaultVideo` VARCHAR( 64 ) NOT NULL AFTER `AlarmFrames` ;
```

All documentation for ZoneMinder is now online at https://zoneminder.readthedocs.org


@@ -193,6 +193,7 @@ CREATE TABLE `Events` (
  `Length` decimal(10,2) NOT NULL default '0.00',
  `Frames` int(10) unsigned default NULL,
  `AlarmFrames` int(10) unsigned default NULL,
  `DefaultVideo` VARCHAR( 64 ) NOT NULL,
  `TotScore` int(10) unsigned NOT NULL default '0',
  `AvgScore` smallint(5) unsigned default '0',
  `MaxScore` smallint(5) unsigned default '0',
@@ -344,6 +345,10 @@ CREATE TABLE `Monitors` (
  `Palette` int(10) unsigned NOT NULL default '0',
  `Orientation` enum('0','90','180','270','hori','vert') NOT NULL default '0',
  `Deinterlacing` int(10) unsigned NOT NULL default '0',
  `SaveJPEGs` TINYINT NOT NULL DEFAULT '3' ,
  `VideoWriter` TINYINT NOT NULL DEFAULT '0',
  `EncoderParameters` TEXT NOT NULL,
  `RecordAudio` TINYINT NOT NULL DEFAULT '0',
  `RTSPDescribe` tinyint(1) unsigned NOT NULL default '0',
  `Brightness` mediumint(7) NOT NULL default '-1',
  `Contrast` mediumint(7) NOT NULL default '-1',

db/zm_update-1.29.2.sql (new file, 73 lines)

@@ -0,0 +1,73 @@
--
-- This updates a 1.29.0 database to 1.30.0
--
SET @s = (SELECT IF(
(SELECT COUNT(*)
FROM INFORMATION_SCHEMA.COLUMNS
WHERE table_name = 'Monitors'
AND table_schema = DATABASE()
AND column_name = 'SaveJPEGs'
) > 0,
"SELECT 'Column SaveJPEGs exists in Monitors'",
"ALTER TABLE `Monitors` ADD `SaveJPEGs` TINYINT NOT NULL DEFAULT '3' AFTER `Deinterlacing`"
));
PREPARE stmt FROM @s;
EXECUTE stmt;
SET @s = (SELECT IF(
(SELECT COUNT(*)
FROM INFORMATION_SCHEMA.COLUMNS
WHERE table_name = 'Monitors'
AND table_schema = DATABASE()
AND column_name = 'VideoWriter'
) > 0,
"SELECT 'Column VideoWriter exists in Monitors'",
"ALTER TABLE `Monitors` ADD `VideoWriter` TINYINT NOT NULL DEFAULT '0' AFTER `SaveJPEGs`"
));
PREPARE stmt FROM @s;
EXECUTE stmt;
SET @s = (SELECT IF(
(SELECT COUNT(*)
FROM INFORMATION_SCHEMA.COLUMNS
WHERE table_name = 'Monitors'
AND table_schema = DATABASE()
AND column_name = 'EncoderParameters'
) > 0,
"SELECT 'Column EncoderParameters exists in Monitors'",
"ALTER TABLE `Monitors` ADD `EncoderParameters` TEXT NOT NULL AFTER `VideoWriter`"
));
PREPARE stmt FROM @s;
EXECUTE stmt;
SET @s = (SELECT IF(
(SELECT COUNT(*)
FROM INFORMATION_SCHEMA.COLUMNS
WHERE table_name = 'Events'
AND table_schema = DATABASE()
AND column_name = 'DefaultVideo'
) > 0,
"SELECT 'Column DefaultVideo exists in Events'",
"ALTER TABLE `Events` ADD `DefaultVideo` VARCHAR( 64 ) NOT NULL AFTER `AlarmFrames`"
));
PREPARE stmt FROM @s;
EXECUTE stmt;
SET @s = (SELECT IF(
(SELECT COUNT(*)
FROM INFORMATION_SCHEMA.COLUMNS
WHERE table_name = 'Monitors'
AND table_schema = DATABASE()
AND column_name = 'RecordAudio'
) > 0,
"SELECT 'Column RecordAudio exists in Monitors'",
"ALTER TABLE `Monitors` ADD `RecordAudio` TINYINT NOT NULL DEFAULT '0' AFTER `EncoderParameters`"
));
PREPARE stmt FROM @s;
EXECUTE stmt;


@@ -1,89 +0,0 @@
#!/usr/bin/make -f
# -*- makefile -*-

# Uncomment this to turn on verbose mode.
#export DH_VERBOSE=1

export DEB_BUILD_MAINT_OPTIONS = hardening=+all
export DEB_LDFLAGS_MAINT_APPEND += -Wl,--as-needed

ifeq ($(DEB_BUILD_ARCH_OS),hurd)
ARGS:= -DZM_NO_MMAP=ON
endif

%:
	dh $@ --parallel --buildsystem=cmake --builddirectory=dbuild \
	   --with sphinxdoc,apache2,linktree

override_dh_auto_configure:
	dh_auto_configure -- $(ARGS) \
	   -DCMAKE_VERBOSE_MAKEFILE=ON \
	   -DCMAKE_BUILD_TYPE=Release \
	   -DZM_CONFIG_DIR="/etc/zm" \
	   -DZM_RUNDIR="/var/run/zm" \
	   -DZM_SOCKDIR="/var/run/zm" \
	   -DZM_TMPDIR="/tmp/zm" \
	   -DZM_CGIDIR="/usr/lib/zoneminder/cgi-bin" \
	   -DZM_CONTENTDIR="/var/cache/zoneminder"

override_dh_clean:
	dh_clean $(MANPAGES1)
	$(RM) -r docs/_build docs/installationguide

build-indep:
	#$(MAKE) -C docs text
	$(MAKE) -C docs html

MANPAGES1 = dbuild/scripts/zmupdate.pl.1
$(MANPAGES1):
	# generate man page(s):
	pod2man -s1 --stderr --utf8 $(patsubst %.1, %, $@) $@

## reproducible build:
LAST_CHANGE=$(shell dpkg-parsechangelog -S Date)
BUILD_DATE=$(shell LC_ALL=C date -u "+%B %d, %Y" -d "$(LAST_CHANGE)")

override_dh_installman: $(MANPAGES1)
	$(MAKE) -C docs man SPHINXOPTS="-D today=\"$(BUILD_DATE)\""
	dh_installman --language=C $(MANPAGES1)

override_dh_auto_install:
	dh_auto_install --destdir=$(CURDIR)/debian/tmp
	# remove worthless files:
	$(RM) -v $(CURDIR)/debian/tmp/usr/share/perl5/*/*/*/.packlist
	$(RM) -v $(CURDIR)/debian/tmp/usr/share/perl5/*/*.in
	# remove empty directories:
	find $(CURDIR)/debian/tmp/usr -type d -empty -delete -printf 'removed %p\n'
	# remove extra-license-file:
	$(RM) -v $(CURDIR)/debian/tmp/usr/share/zoneminder/www/api/lib/Cake/LICENSE.txt

override_dh_fixperms:
	dh_fixperms
	#
	# As requested by the Debian Webapps Policy Manual §3.2.1
	chown root:www-data $(CURDIR)/debian/zoneminder/etc/zm/zm.conf
	chmod 640 $(CURDIR)/debian/zoneminder/etc/zm/zm.conf

override_dh_installinit:
	dh_installinit --no-start

override_dh_apache2:
	dh_apache2 --noenable

override_dh_strip:
	[ -d "$(CURDIR)/debian/zoneminder-dbg" ] \
	&& dh_strip --dbg-package=zoneminder-dbg \
	|| dh_strip

#%:
#	dh $@ --parallel --buildsystem=autoconf --with autoreconf
#
#override_dh_auto_configure:
#	dh_auto_configure -- \
#		--sysconfdir=/etc/zm \
#		--with-mysql=/usr \
#		--with-webdir=/usr/share/zoneminder \
#		--with-ffmpeg=/usr \
#		--with-cgidir=/usr/lib/cgi-bin \
#		--with-webuser=www-data \
#		--with-webgroup=www-data \
#		--enable-mmap=yes


@@ -1,28 +0,0 @@
--- distros/ubuntu1204/rules
+++ distros/ubuntu1204/rules
@@ -58,8 +58,10 @@ override_dh_auto_install:
override_dh_fixperms:
dh_fixperms
- ## 637685
- chmod -c o-r $(CURDIR)/debian/zoneminder/etc/zm/zm.conf
+ #
+ # As requested by the Debian Webapps Policy Manual §3.2.1
+ chown root:www-data debian/zoneminder-core/etc/zm/zm.conf
+ chmod 640 debian/zoneminder-core/etc/zm/zm.conf
override_dh_installinit:
dh_installinit --no-start
--- distros/ubuntu1204/rules
+++ distros/ubuntu1204/rules
@@ -60,8 +60,8 @@ override_dh_fixperms:
dh_fixperms
#
# As requested by the Debian Webapps Policy Manual §3.2.1
- chown root:www-data debian/zoneminder-core/etc/zm/zm.conf
- chmod 640 debian/zoneminder-core/etc/zm/zm.conf
+ chown root:www-data $(CURDIR)/debian/zoneminder/etc/zm/zm.conf
+ chmod 640 $(CURDIR)/debian/zoneminder/etc/zm/zm.conf
override_dh_installinit:
dh_installinit --no-start


@@ -25,7 +25,9 @@ Build-Depends: debhelper (>= 9), dh-systemd, python-sphinx | python3-sphinx, apa
 ,libphp-serialization-perl
 ,libsys-mmap-perl [!hurd-any]
 ,libwww-perl
 ,libdata-uuid-perl
 ,libx264-dev
 ,libmp4v2-dev
# Unbundled (dh_linktree):
 ,libjs-jquery
 ,libjs-mootools
@@ -60,10 +62,10 @@ Depends: ${shlibs:Depends}, ${misc:Depends}, ${perl:Depends}
 ,libio-socket-multicast-perl
 ,libdigest-sha-perl
 ,libsys-cpu-perl, libsys-meminfo-perl
 ,libdata-uuid-perl
 ,mysql-client | virtual-mysql-client
 ,perl-modules
-,php5-mysql | php-mysql, php5-gd | php-gd, php-apcu, php-apcu-bc
+,php5-mysql | php-mysql, php5-gd | php-gd, php-apcu, php-apcu-bc | php-gd
 ,policykit-1
 ,rsyslog | system-log-daemon
 ,zip


@@ -4,7 +4,7 @@
configure_file(zm_config.h.in "${CMAKE_CURRENT_BINARY_DIR}/zm_config.h" @ONLY)

# Group together all the source files that are used by all the binaries (zmc, zma, zmu, zms etc)
-set(ZM_BIN_SRC_FILES zm_box.cpp zm_buffer.cpp zm_camera.cpp zm_comms.cpp zm_config.cpp zm_coord.cpp zm_curl_camera.cpp zm.cpp zm_db.cpp zm_logger.cpp zm_event.cpp zm_exception.cpp zm_file_camera.cpp zm_ffmpeg_camera.cpp zm_image.cpp zm_jpeg.cpp zm_libvlc_camera.cpp zm_local_camera.cpp zm_monitor.cpp zm_ffmpeg.cpp zm_mpeg.cpp zm_poly.cpp zm_regexp.cpp zm_remote_camera.cpp zm_remote_camera_http.cpp zm_remote_camera_rtsp.cpp zm_rtp.cpp zm_rtp_ctrl.cpp zm_rtp_data.cpp zm_rtp_source.cpp zm_rtsp.cpp zm_rtsp_auth.cpp zm_sdp.cpp zm_signal.cpp zm_stream.cpp zm_thread.cpp zm_time.cpp zm_timer.cpp zm_user.cpp zm_utils.cpp zm_zone.cpp)
+set(ZM_BIN_SRC_FILES zm_box.cpp zm_buffer.cpp zm_camera.cpp zm_comms.cpp zm_config.cpp zm_coord.cpp zm_curl_camera.cpp zm.cpp zm_db.cpp zm_logger.cpp zm_event.cpp zm_exception.cpp zm_file_camera.cpp zm_ffmpeg_camera.cpp zm_image.cpp zm_jpeg.cpp zm_libvlc_camera.cpp zm_local_camera.cpp zm_monitor.cpp zm_ffmpeg.cpp zm_mpeg.cpp zm_poly.cpp zm_regexp.cpp zm_remote_camera.cpp zm_remote_camera_http.cpp zm_remote_camera_rtsp.cpp zm_rtp.cpp zm_rtp_ctrl.cpp zm_rtp_data.cpp zm_rtp_source.cpp zm_rtsp.cpp zm_rtsp_auth.cpp zm_sdp.cpp zm_signal.cpp zm_stream.cpp zm_thread.cpp zm_time.cpp zm_timer.cpp zm_user.cpp zm_utils.cpp zm_video.cpp zm_videostore.cpp zm_zone.cpp)

# A fix for cmake recompiling the source files for every target.
add_library(zm STATIC ${ZM_BIN_SRC_FILES})


@@ -20,23 +20,24 @@
#include "zm.h"
#include "zm_camera.h"

-Camera::Camera( int p_id, SourceType p_type, int p_width, int p_height, int p_colours, int p_subpixelorder, int p_brightness, int p_contrast, int p_hue, int p_colour, bool p_capture ) :
+Camera::Camera( unsigned int p_monitor_id, SourceType p_type, int p_width, int p_height, int p_colours, int p_subpixelorder, int p_brightness, int p_contrast, int p_hue, int p_colour, bool p_capture, bool p_record_audio ) :
-  id( p_id ),
+  monitor_id( p_monitor_id ),
  type( p_type ),
  width( p_width),
  height( p_height ),
  colours( p_colours ),
  subpixelorder( p_subpixelorder ),
  brightness( p_brightness ),
  hue( p_hue ),
  colour( p_colour ),
  contrast( p_contrast ),
-  capture( p_capture )
+  capture( p_capture ),
+  record_audio( p_record_audio )
{
  pixels = width * height;
  imagesize = pixels * colours;

-  Debug(2,"New camera id: %d width: %d height: %d colours: %d subpixelorder: %d capture: %d",id,width,height,colours,subpixelorder,capture);
+  Debug(2,"New camera id: %d width: %d height: %d colours: %d subpixelorder: %d capture: %d",monitor_id,width,height,colours,subpixelorder,capture);

  /* Because many loops are unrolled and work on 16 colours/time or 4 pixels/time, we have to meet requirements */
  if((colours == ZM_COLOUR_GRAY8 || colours == ZM_COLOUR_RGB32) && (imagesize % 64) != 0) {
@@ -46,7 +47,11 @@ Camera::Camera( int p_id, SourceType p_type, int p_width, int p_height, int p_co
  }
}

-Camera::~Camera()
-{
+Camera::~Camera() {
}

Monitor *Camera::getMonitor() {
  if ( ! monitor )
    monitor = Monitor::Load( monitor_id, false, Monitor::QUERY );
  return monitor;
}
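
The `getMonitor()` accessor added above lazily loads the Monitor: the pointer starts NULL and the object is fetched once, on first use. A self-contained sketch of the same pattern, with `Monitor` and `load_count` as stand-ins for the real database-backed class:

```cpp
#include <cassert>

// Stand-in for the real Monitor class.
struct Monitor { unsigned int id; };

static int load_count = 0;      // how many times the "database" was hit

struct Camera {
  unsigned int monitor_id;
  Monitor *monitor = nullptr;   // null on instantiation

  Monitor *getMonitor() {
    if (!monitor) {             // load only on first access
      ++load_count;
      monitor = new Monitor{monitor_id};
    }
    return monitor;
  }
  ~Camera() { delete monitor; }
};

static int demo() {
  Camera cam{42};
  cam.getMonitor();
  cam.getMonitor();             // cached, no second load
  return load_count;
}
```

Calling `getMonitor()` twice performs only one load; the second call returns the cached pointer.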


@@ -25,6 +25,10 @@
#include "zm_image.h"

class Camera;
#include "zm_monitor.h"

//
// Abstract base class for cameras. This is intended just to express
// common attributes
@@ -34,7 +38,8 @@ class Camera
protected:
  typedef enum { LOCAL_SRC, REMOTE_SRC, FILE_SRC, FFMPEG_SRC, LIBVLC_SRC, CURL_SRC } SourceType;

-  int id;
+  unsigned int monitor_id;
+  Monitor * monitor; // Null on instantiation, set as soon as possible.
  SourceType type;
  unsigned int width;
  unsigned int height;
@@ -47,12 +52,14 @@ protected:
  int colour;
  int contrast;
  bool capture;
  bool record_audio;

public:
-  Camera( int p_id, SourceType p_type, int p_width, int p_height, int p_colours, int p_subpixelorder, int p_brightness, int p_contrast, int p_hue, int p_colour, bool p_capture );
+  Camera( unsigned int p_monitor_id, SourceType p_type, int p_width, int p_height, int p_colours, int p_subpixelorder, int p_brightness, int p_contrast, int p_hue, int p_colour, bool p_capture, bool p_record_audio );
  virtual ~Camera();

-  int getId() const { return( id ); }
+  unsigned int getId() const { return( monitor_id ); }
+  Monitor *getMonitor();
  SourceType Type() const { return( type ); }
  bool IsLocal() const { return( type == LOCAL_SRC ); }
  bool IsRemote() const { return( type == REMOTE_SRC ); }
@@ -74,10 +81,13 @@ public:
  bool CanCapture() const { return( capture ); }

  bool SupportsNativeVideo() const { return( (type == FFMPEG_SRC )||(type == REMOTE_SRC)); }

  virtual int PrimeCapture() { return( 0 ); }
  virtual int PreCapture()=0;
  virtual int Capture( Image &image )=0;
  virtual int PostCapture()=0;
  virtual int CaptureAndRecord( Image &image, bool recording, char* event_directory)=0;
};

#endif // ZM_CAMERA_H
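
Declaring `CaptureAndRecord()` as a pure virtual means every camera subclass must provide at least a stub, which is why source types without native recording (such as the cURL camera later in this diff) gain a log-and-return implementation. A compressed sketch of that contract, with illustrative names and return codes rather than the real ZoneMinder declarations:

```cpp
#include <cassert>

// Abstract camera: the new pure virtual must be implemented by every
// subclass, or the subclass itself stays abstract.
struct CameraBase {
  virtual ~CameraBase() {}
  virtual int Capture() = 0;
  virtual int CaptureAndRecord(bool recording) = 0;
};

// A source type with no native recording supplies a stub, mirroring
// what the cURL camera does: log and return without recording.
struct StubCamera : CameraBase {
  int Capture() override { return 0; }
  int CaptureAndRecord(bool /*recording*/) override {
    // Error("Capture and Record not implemented for this camera type");
    return 0;
  }
};

static int demo() {
  StubCamera cam;
  CameraBase &base = cam;       // callers work through the base class
  return base.CaptureAndRecord(true);
}
```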


@@ -30,8 +30,8 @@ const char* content_type_match = "Content-Type:";
size_t content_length_match_len;
size_t content_type_match_len;

-cURLCamera::cURLCamera( int p_id, const std::string &p_path, const std::string &p_user, const std::string &p_pass, int p_width, int p_height, int p_colours, int p_brightness, int p_contrast, int p_hue, int p_colour, bool p_capture ) :
+cURLCamera::cURLCamera( int p_id, const std::string &p_path, const std::string &p_user, const std::string &p_pass, int p_width, int p_height, int p_colours, int p_brightness, int p_contrast, int p_hue, int p_colour, bool p_capture, bool p_record_audio ) :
-  Camera( p_id, CURL_SRC, p_width, p_height, p_colours, ZM_SUBPIX_ORDER_DEFAULT_FOR_COLOUR(p_colours), p_brightness, p_contrast, p_hue, p_colour, p_capture ),
+  Camera( p_id, CURL_SRC, p_width, p_height, p_colours, ZM_SUBPIX_ORDER_DEFAULT_FOR_COLOUR(p_colours), p_brightness, p_contrast, p_hue, p_colour, p_capture, p_record_audio ),
  mPath( p_path ), mUser( p_user ), mPass ( p_pass ), bTerminate( false ), bReset( false ), mode ( MODE_UNSET )
{
@@ -311,6 +311,14 @@ int cURLCamera::PostCapture()
  return( 0 );
}

int cURLCamera::CaptureAndRecord( Image &image, bool recording, char* event_directory )
{
  Error("Capture and Record not implemented for the cURL camera type");
  // Nothing to do here
  return( 0 );
}

size_t cURLCamera::data_callback(void *buffer, size_t size, size_t nmemb, void *userdata)
{
  lock();


@@ -65,7 +65,7 @@ protected:
  pthread_cond_t request_complete_cond;

public:
-  cURLCamera( int p_id, const std::string &path, const std::string &username, const std::string &password, int p_width, int p_height, int p_colours, int p_brightness, int p_contrast, int p_hue, int p_colour, bool p_capture );
+  cURLCamera( int p_id, const std::string &path, const std::string &username, const std::string &password, int p_width, int p_height, int p_colours, int p_brightness, int p_contrast, int p_hue, int p_colour, bool p_capture, bool p_record_audio );
  ~cURLCamera();

  const std::string &Path() const { return( mPath ); }
@@ -79,6 +79,7 @@ public:
  int PreCapture();
  int Capture( Image &image );
  int PostCapture();
  int CaptureAndRecord( Image &image, bool recording, char* event_directory);

  size_t data_callback(void *buffer, size_t size, size_t nmemb, void *userdata);
  size_t header_callback(void *buffer, size_t size, size_t nmemb, void *userdata);


@ -52,15 +52,18 @@ bool Event::initialised = false;
char Event::capture_file_format[PATH_MAX]; char Event::capture_file_format[PATH_MAX];
char Event::analyse_file_format[PATH_MAX]; char Event::analyse_file_format[PATH_MAX];
char Event::general_file_format[PATH_MAX]; char Event::general_file_format[PATH_MAX];
char Event::video_file_format[PATH_MAX];
int Event::pre_alarm_count = 0; int Event::pre_alarm_count = 0;
Event::PreAlarmData Event::pre_alarm_data[MAX_PRE_ALARM_FRAMES] = { { 0 } }; Event::PreAlarmData Event::pre_alarm_data[MAX_PRE_ALARM_FRAMES] = { { 0 } };
Event::Event( Monitor *p_monitor, struct timeval p_start_time, const std::string &p_cause, const StringSetMap &p_noteSetMap ) : Event::Event( Monitor *p_monitor, struct timeval p_start_time, const std::string &p_cause, const StringSetMap &p_noteSetMap, bool p_videoEvent ) :
monitor( p_monitor ), monitor( p_monitor ),
start_time( p_start_time ), start_time( p_start_time ),
cause( p_cause ), cause( p_cause ),
noteSetMap( p_noteSetMap ) noteSetMap( p_noteSetMap ),
videoEvent( p_videoEvent ),
videowriter( NULL )
{ {
if ( !initialised ) if ( !initialised )
Initialise(); Initialise();
@ -78,7 +81,7 @@ Event::Event( Monitor *p_monitor, struct timeval p_start_time, const std::string
static char sql[ZM_SQL_MED_BUFSIZ]; static char sql[ZM_SQL_MED_BUFSIZ];
struct tm *stime = localtime( &start_time.tv_sec ); struct tm *stime = localtime( &start_time.tv_sec );
snprintf( sql, sizeof(sql), "insert into Events ( MonitorId, Name, StartTime, Width, Height, Cause, Notes ) values ( %d, 'New Event', from_unixtime( %ld ), %d, %d, '%s', '%s' )", monitor->Id(), start_time.tv_sec, monitor->Width(), monitor->Height(), cause.c_str(), notes.c_str() ); snprintf( sql, sizeof(sql), "insert into Events ( MonitorId, Name, StartTime, Width, Height, Cause, Notes, Videoed ) values ( %d, 'New Event', from_unixtime( %ld ), %d, %d, '%s', '%s', '%d' )", monitor->Id(), start_time.tv_sec, monitor->Width(), monitor->Height(), cause.c_str(), notes.c_str(), videoEvent );
if ( mysql_query( &dbconn, sql ) ) if ( mysql_query( &dbconn, sql ) )
{ {
Error( "Can't insert event: %s", mysql_error( &dbconn ) ); Error( "Can't insert event: %s", mysql_error( &dbconn ) );
@ -168,6 +171,48 @@ Event::Event( Monitor *p_monitor, struct timeval p_start_time, const std::string
Fatal( "Can't fopen %s: %s", id_file, strerror(errno)); Fatal( "Can't fopen %s: %s", id_file, strerror(errno));
} }
last_db_frame = 0; last_db_frame = 0;
video_name[0] = 0;
/* Save as video */
if ( monitor->GetOptVideoWriter() != 0 ) {
int nRet;
snprintf( video_name, sizeof(video_name), "%d-%s", id, "video.mp4" );
snprintf( video_file, sizeof(video_file), video_file_format, path, video_name );
snprintf( timecodes_name, sizeof(timecodes_name), "%d-%s", id, "video.timecodes" );
snprintf( timecodes_file, sizeof(timecodes_file), video_file_format, path, timecodes_name );
/* X264 MP4 video writer */
if(monitor->GetOptVideoWriter() == 1) {
#if ZM_HAVE_VIDEOWRITER_X264MP4
videowriter = new X264MP4Writer(video_file, monitor->Width(), monitor->Height(), monitor->Colours(), monitor->SubpixelOrder(), monitor->GetOptEncoderParams());
#else
videowriter = NULL;
Error("ZoneMinder was not compiled with the X264 MP4 video writer, check dependencies (x264 and mp4v2)");
#endif
}
if(videowriter != NULL) {
/* Open the video stream */
nRet = videowriter->Open();
if(nRet != 0) {
Error("Failed opening video stream");
delete videowriter;
videowriter = NULL;
}
/* Create timecodes file */
timecodes_fd = fopen(timecodes_file, "wb");
if(timecodes_fd == NULL) {
Error("Failed creating timecodes file");
}
}
} else {
/* No video object */
videowriter = NULL;
}
}
Event::~Event()
@@ -187,12 +232,28 @@ Event::~Event()
    }
  }
/* Close the video file */
if ( videowriter != NULL ) {
int nRet;
nRet = videowriter->Close();
if(nRet != 0) {
Error("Failed closing video stream");
}
delete videowriter;
videowriter = NULL;
/* Close the timecodes file */
fclose(timecodes_fd);
timecodes_fd = NULL;
}
  static char sql[ZM_SQL_MED_BUFSIZ];
  struct DeltaTimeval delta_time;
  DELTA_TIMEVAL( delta_time, end_time, start_time, DT_PREC_2 );
  snprintf( sql, sizeof(sql), "update Events set Name='%s%d', EndTime = from_unixtime( %ld ), Length = %s%ld.%02ld, Frames = %d, AlarmFrames = %d, TotScore = %d, AvgScore = %d, MaxScore = %d, DefaultVideo = '%s' where Id = %d", monitor->EventPrefix(), id, end_time.tv_sec, delta_time.positive?"":"-", delta_time.sec, delta_time.fsec, frames, alarm_frames, tot_score, (int)(alarm_frames?(tot_score/alarm_frames):0), max_score, video_name, id );
  if ( mysql_query( &dbconn, sql ) )
  {
    Error( "Can't update event: %s", mysql_error( &dbconn ) );
@@ -240,6 +301,41 @@ bool Event::WriteFrameImage( Image *image, struct timeval timestamp, const char
  return( true );
}
bool Event::WriteFrameVideo( const Image *image, const struct timeval timestamp, VideoWriter* videow )
{
const Image* frameimg = image;
Image ts_image;
/* Checking for invalid parameters */
if ( videow == NULL )
{
Error("NULL Video object");
return false;
}
/* If the image does not contain a timestamp, add the timestamp */
if (!config.timestamp_on_capture) {
ts_image = *image;
monitor->TimestampImage( &ts_image, &timestamp );
frameimg = &ts_image;
}
/* Calculate delta time */
struct DeltaTimeval delta_time3;
DELTA_TIMEVAL( delta_time3, timestamp, start_time, DT_PREC_3 );
unsigned int timeMS = (delta_time3.sec * delta_time3.prec) + delta_time3.fsec;
/* Encode and write the frame */
if(videowriter->Encode(frameimg, timeMS) != 0) {
Error("Failed encoding video frame");
}
/* Add the frame to the timecodes file, if it was created successfully */
if ( timecodes_fd != NULL )
  fprintf(timecodes_fd, "%u\n", timeMS);
return( true );
}
void Event::updateNotes( const StringSetMap &newNoteSetMap )
{
  bool update = false;
@@ -384,9 +480,21 @@ void Event::AddFramesInternal( int n_frames, int start_frame, Image **images, st
  static char event_file[PATH_MAX];
  snprintf( event_file, sizeof(event_file), capture_file_format, path, frames );
if ( monitor->GetOptSaveJPEGs() & 4) {
  //If this is the first frame, we should add a thumbnail to the event directory
  if(frames == 10){
char snapshot_file[PATH_MAX];
snprintf( snapshot_file, sizeof(snapshot_file), "%s/snapshot.jpg", path );
WriteFrameImage( images[i], *(timestamps[i]), snapshot_file );
}
}
if ( monitor->GetOptSaveJPEGs() & 1) {
Debug( 1, "Writing pre-capture frame %d", frames );
WriteFrameImage( images[i], *(timestamps[i]), event_file );
}
if ( videowriter != NULL ) {
WriteFrameVideo( images[i], *(timestamps[i]), videowriter );
}
  struct DeltaTimeval delta_time;
  DELTA_TIMEVAL( delta_time, *(timestamps[i]), start_time, DT_PREC_2 );
@@ -427,8 +535,21 @@ void Event::AddFrame( Image *image, struct timeval timestamp, int score, Image *
  static char event_file[PATH_MAX];
  snprintf( event_file, sizeof(event_file), capture_file_format, path, frames );
if ( monitor->GetOptSaveJPEGs() & 4) {
  //If this is the first frame, we should add a thumbnail to the event directory
  if(frames == 10){
char snapshot_file[PATH_MAX];
snprintf( snapshot_file, sizeof(snapshot_file), "%s/snapshot.jpg", path );
WriteFrameImage( image, timestamp, snapshot_file );
}
}
if( monitor->GetOptSaveJPEGs() & 1) {
Debug( 1, "Writing capture frame %d", frames );
WriteFrameImage( image, timestamp, event_file );
}
if ( videowriter != NULL ) {
WriteFrameVideo( image, timestamp, videowriter );
}
  struct DeltaTimeval delta_time;
  DELTA_TIMEVAL( delta_time, timestamp, start_time, DT_PREC_2 );
@@ -479,7 +600,9 @@ void Event::AddFrame( Image *image, struct timeval timestamp, int score, Image *
  snprintf( event_file, sizeof(event_file), analyse_file_format, path, frames );
  Debug( 1, "Writing analysis frame %d", frames );
  if ( monitor->GetOptSaveJPEGs() & 2) {
WriteFrameImage( alarm_image, timestamp, event_file, true );
}
}
}
@@ -1206,10 +1329,10 @@ bool EventStream::sendFrame( int delta_us )
      Error("Unable to send raw frame %u: %s",curr_frame_id,strerror(errno));
      return( false );
    }
#endif
    fclose(fdj); /* Close the file handle */
  } else {
    fprintf( stdout, "Content-Length: %d\r\n\r\n", img_buffer_size );
    if ( fwrite( img_buffer, img_buffer_size, 1, stdout ) != 1 )
    {
      Error( "Unable to send stream frame: %s", strerror(errno) );


@@ -37,6 +37,7 @@
#include "zm.h"
#include "zm_image.h"
#include "zm_stream.h"
#include "zm_video.h"
class Zone;
class Monitor;
@@ -55,6 +56,7 @@ protected:
  static char capture_file_format[PATH_MAX];
  static char analyse_file_format[PATH_MAX];
  static char general_file_format[PATH_MAX];
static char video_file_format[PATH_MAX];
protected:
  static int sd;
@@ -84,11 +86,18 @@ protected:
  struct timeval end_time;
  std::string cause;
  StringSetMap noteSetMap;
bool videoEvent;
  int frames;
  int alarm_frames;
  unsigned int tot_score;
  unsigned int max_score;
  char path[PATH_MAX];
VideoWriter* videowriter;
FILE* timecodes_fd;
char video_name[PATH_MAX];
char video_file[PATH_MAX];
char timecodes_name[PATH_MAX];
char timecodes_file[PATH_MAX];
protected:
  int last_db_frame;
@@ -102,6 +111,7 @@ protected:
  snprintf( capture_file_format, sizeof(capture_file_format), "%%s/%%0%dd-capture.jpg", config.event_image_digits );
  snprintf( analyse_file_format, sizeof(analyse_file_format), "%%s/%%0%dd-analyse.jpg", config.event_image_digits );
  snprintf( general_file_format, sizeof(general_file_format), "%%s/%%0%dd-%%s", config.event_image_digits );
snprintf( video_file_format, sizeof(video_file_format), "%%s/%%s");
  initialised = true;
}
@@ -113,7 +123,7 @@ public:
  static bool ValidateFrameSocket( int );
public:
  Event( Monitor *p_monitor, struct timeval p_start_time, const std::string &p_cause, const StringSetMap &p_noteSetMap, bool p_videoEvent=false );
  ~Event();
  int Id() const { return( id ); }
@@ -127,6 +137,7 @@ public:
  bool SendFrameImage( const Image *image, bool alarm_frame=false );
  bool WriteFrameImage( Image *image, struct timeval timestamp, const char *event_file, bool alarm_frame=false );
bool WriteFrameVideo( const Image *image, const struct timeval timestamp, VideoWriter* videow );
  void updateNotes( const StringSetMap &stringSetMap );
@@ -148,6 +159,11 @@ public:
    return( Event::getSubPath( localtime( time ) ) );
  }
char* getEventFile(void)
{
return video_file;
}
public:
  static int PreAlarmCount()
  {


@@ -23,6 +23,16 @@
#if HAVE_LIBAVCODEC || HAVE_LIBAVUTIL || HAVE_LIBSWSCALE
void FFMPEGInit() {
static bool bInit = false;
if(!bInit) {
av_register_all();
av_log_set_level(AV_LOG_DEBUG);
bInit = true;
}
}
#if HAVE_LIBAVUTIL
enum _AVPIXELFORMAT GetFFMPEGPixelFormat(unsigned int p_colours, unsigned p_subpixelorder) {
  enum _AVPIXELFORMAT pf;
@@ -189,11 +199,11 @@ int SWScale::Convert(const uint8_t* in_buffer, const size_t in_buffer_size, uint
    Error("NULL Input or output buffer");
    return -1;
  }
  // if(in_pf == 0 || out_pf == 0) {
  //   Error("Invalid input or output pixel formats");
  //   return -2;
  // }
  if (!width || !height) {
    Error("Invalid width or height");
    return -3;
  }
@@ -229,18 +239,30 @@ int SWScale::Convert(const uint8_t* in_buffer, const size_t in_buffer_size, uint
  }
  /* Get the context */
  swscale_ctx = sws_getCachedContext(swscale_ctx, width, height, in_pf, width, height, out_pf, 0, NULL, NULL, NULL);
  if(swscale_ctx == NULL) {
    Error("Failed getting swscale context");
    return -6;
  }
  /* Fill in the buffers */
#if LIBAVUTIL_VERSION_CHECK(54, 6, 0, 6, 0)
if (av_image_fill_arrays(input_avframe->data, input_avframe->linesize,
(uint8_t*) in_buffer, in_pf, width, height, 1) <= 0) {
#else
if (avpicture_fill((AVPicture*) input_avframe, (uint8_t*) in_buffer,
in_pf, width, height) <= 0) {
#endif
    Error("Failed filling input frame with input buffer");
    return -7;
  }
#if LIBAVUTIL_VERSION_CHECK(54, 6, 0, 6, 0)
if (av_image_fill_arrays(output_avframe->data, output_avframe->linesize,
out_buffer, out_pf, width, height, 1) <= 0) {
#else
if (avpicture_fill((AVPicture*) output_avframe, out_buffer, out_pf, width,
height) <= 0) {
#endif
    Error("Failed filling output frame with output buffer");
    return -8;
  }
@@ -291,3 +313,169 @@ int SWScale::ConvertDefaults(const uint8_t* in_buffer, const size_t in_buffer_si
#endif // HAVE_LIBAVCODEC || HAVE_LIBAVUTIL || HAVE_LIBSWSCALE
#if HAVE_LIBAVUTIL
int64_t av_rescale_delta(AVRational in_tb, int64_t in_ts, AVRational fs_tb, int duration, int64_t *last, AVRational out_tb){
int64_t a, b, this_thing;
av_assert0(in_ts != AV_NOPTS_VALUE);
av_assert0(duration >= 0);
if (*last == AV_NOPTS_VALUE || !duration || in_tb.num*(int64_t)out_tb.den <= out_tb.num*(int64_t)in_tb.den) {
simple_round:
*last = av_rescale_q(in_ts, in_tb, fs_tb) + duration;
return av_rescale_q(in_ts, in_tb, out_tb);
}
a = av_rescale_q_rnd(2*in_ts-1, in_tb, fs_tb, AV_ROUND_DOWN) >>1;
b = (av_rescale_q_rnd(2*in_ts+1, in_tb, fs_tb, AV_ROUND_UP )+1)>>1;
if (*last < 2*a - b || *last > 2*b - a)
goto simple_round;
this_thing = av_clip64(*last, a, b);
*last = this_thing + duration;
return av_rescale_q(this_thing, fs_tb, out_tb);
}
#endif
int hacked_up_context2_for_older_ffmpeg(AVFormatContext **avctx, AVOutputFormat *oformat, const char *format, const char *filename) {
AVFormatContext *s = avformat_alloc_context();
int ret = 0;
*avctx = NULL;
if (!s) {
av_log(s, AV_LOG_ERROR, "Out of memory\n");
ret = AVERROR(ENOMEM);
return ret;
}
if (!oformat) {
if (format) {
oformat = av_guess_format(format, NULL, NULL);
if (!oformat) {
av_log(s, AV_LOG_ERROR, "Requested output format '%s' is not a suitable output format\n", format);
ret = AVERROR(EINVAL);
}
} else {
oformat = av_guess_format(NULL, filename, NULL);
if (!oformat) {
ret = AVERROR(EINVAL);
av_log(s, AV_LOG_ERROR, "Unable to find a suitable output format for '%s'\n", filename);
}
}
}
if (ret) {
avformat_free_context(s);
return ret;
} else {
s->oformat = oformat;
if (s->oformat->priv_data_size > 0) {
s->priv_data = av_mallocz(s->oformat->priv_data_size);
if (s->priv_data) {
if (s->oformat->priv_class) {
*(const AVClass**)s->priv_data= s->oformat->priv_class;
av_opt_set_defaults(s->priv_data);
}
} else {
av_log(s, AV_LOG_ERROR, "Out of memory\n");
ret = AVERROR(ENOMEM);
return ret;
}
    } else {
      s->priv_data = NULL;
    }
if (filename) strncpy(s->filename, filename, sizeof(s->filename));
*avctx = s;
return 0;
}
}
static void zm_log_fps(double d, const char *postfix) {
uint64_t v = lrintf(d * 100);
if (!v) {
Debug(3, "%1.4f %s", d, postfix);
} else if (v % 100) {
Debug(3, "%3.2f %s", d, postfix);
} else if (v % (100 * 1000)) {
Debug(3, "%1.0f %s", d, postfix);
} else
Debug(3, "%1.0fk %s", d / 1000, postfix);
}
/* "user interface" functions */
void zm_dump_stream_format(AVFormatContext *ic, int i, int index, int is_output) {
char buf[256];
Debug(1, "Dumping stream index i(%d) index(%d)", i, index );
int flags = (is_output ? ic->oformat->flags : ic->iformat->flags);
AVStream *st = ic->streams[i];
AVDictionaryEntry *lang = av_dict_get(st->metadata, "language", NULL, 0);
avcodec_string(buf, sizeof(buf), st->codec, is_output);
Debug(3, " Stream #%d:%d", index, i);
/* the pid is an important information, so we display it */
/* XXX: add a generic system */
if (flags & AVFMT_SHOW_IDS)
Debug(3, "[0x%x]", st->id);
if (lang)
Debug(3, "(%s)", lang->value);
av_log(NULL, AV_LOG_DEBUG, ", %d, %d/%d", st->codec_info_nb_frames,
st->time_base.num, st->time_base.den);
Debug(3, ": %s", buf);
if (st->sample_aspect_ratio.num && // default
av_cmp_q(st->sample_aspect_ratio, st->codec->sample_aspect_ratio)) {
AVRational display_aspect_ratio;
av_reduce(&display_aspect_ratio.num, &display_aspect_ratio.den,
st->codec->width * (int64_t)st->sample_aspect_ratio.num,
st->codec->height * (int64_t)st->sample_aspect_ratio.den,
1024 * 1024);
Debug(3, ", SAR %d:%d DAR %d:%d",
st->sample_aspect_ratio.num, st->sample_aspect_ratio.den,
display_aspect_ratio.num, display_aspect_ratio.den);
}
if (st->codec->codec_type == AVMEDIA_TYPE_VIDEO) {
int fps = st->avg_frame_rate.den && st->avg_frame_rate.num;
int tbn = st->time_base.den && st->time_base.num;
int tbc = st->codec->time_base.den && st->codec->time_base.num;
if (fps || tbn || tbc)
Debug(3, "\n" );
if (fps)
zm_log_fps(av_q2d(st->avg_frame_rate), tbn || tbc ? "fps, " : "fps");
if (tbn)
zm_log_fps(1 / av_q2d(st->time_base), tbc ? "tbn, " : "tbn");
if (tbc)
zm_log_fps(1 / av_q2d(st->codec->time_base), "tbc");
}
if (st->disposition & AV_DISPOSITION_DEFAULT)
Debug(3, " (default)");
if (st->disposition & AV_DISPOSITION_DUB)
Debug(3, " (dub)");
if (st->disposition & AV_DISPOSITION_ORIGINAL)
Debug(3, " (original)");
if (st->disposition & AV_DISPOSITION_COMMENT)
Debug(3, " (comment)");
if (st->disposition & AV_DISPOSITION_LYRICS)
Debug(3, " (lyrics)");
if (st->disposition & AV_DISPOSITION_KARAOKE)
Debug(3, " (karaoke)");
if (st->disposition & AV_DISPOSITION_FORCED)
Debug(3, " (forced)");
if (st->disposition & AV_DISPOSITION_HEARING_IMPAIRED)
Debug(3, " (hearing impaired)");
if (st->disposition & AV_DISPOSITION_VISUAL_IMPAIRED)
Debug(3, " (visual impaired)");
if (st->disposition & AV_DISPOSITION_CLEAN_EFFECTS)
Debug(3, " (clean effects)");
Debug(3, "\n");
//dump_metadata(NULL, st->metadata, " ");
//dump_sidedata(NULL, st, " ");
}


@@ -29,6 +29,7 @@ extern "C" {
// AVUTIL
#if HAVE_LIBAVUTIL_AVUTIL_H
#include "libavutil/avassert.h"
#include <libavutil/avutil.h>
#include <libavutil/base64.h>
#include <libavutil/mathematics.h>
@@ -199,6 +200,9 @@ extern "C" {
#endif
#endif
/* A single function to initialize ffmpeg, to avoid multiple initializations */
void FFMPEGInit();
#if HAVE_LIBAVUTIL
enum _AVPIXELFORMAT GetFFMPEGPixelFormat(unsigned int p_colours, unsigned p_subpixelorder);
#endif // HAVE_LIBAVUTIL
@@ -288,4 +292,36 @@ protected:
#endif // ( HAVE_LIBAVUTIL_AVUTIL_H || HAVE_LIBAVCODEC_AVCODEC_H || HAVE_LIBAVFORMAT_AVFORMAT_H || HAVE_LIBAVDEVICE_AVDEVICE_H )
#ifndef avformat_alloc_output_context2
int hacked_up_context2_for_older_ffmpeg(AVFormatContext **avctx, AVOutputFormat *oformat, const char *format, const char *filename);
#define avformat_alloc_output_context2(x,y,z,a) hacked_up_context2_for_older_ffmpeg(x,y,z,a)
#endif
#ifndef av_rescale_delta
/**
* Rescale a timestamp while preserving known durations.
*/
int64_t av_rescale_delta(AVRational in_tb, int64_t in_ts, AVRational fs_tb, int duration, int64_t *last, AVRational out_tb);
#endif
#ifndef av_clip64
/**
* Clip a signed 64bit integer value into the amin-amax range.
* @param a value to clip
* @param amin minimum value of the clip range
* @param amax maximum value of the clip range
* @return clipped value
*/
static av_always_inline av_const int64_t av_clip64_c(int64_t a, int64_t amin, int64_t amax)
{
if (a < amin) return amin;
else if (a > amax) return amax;
else return a;
}
#define av_clip64 av_clip64_c
#endif
void zm_dump_stream_format(AVFormatContext *ic, int i, int index, int is_output);
#endif // ZM_FFMPEG_H


@@ -23,6 +23,9 @@
#include "zm_ffmpeg_camera.h"
extern "C"{
#include "libavutil/time.h"
}
#ifndef AV_ERROR_MAX_STRING_SIZE
#define AV_ERROR_MAX_STRING_SIZE 64
#endif
@@ -33,8 +36,8 @@
#include <pthread.h>
#endif
FfmpegCamera::FfmpegCamera( int p_id, const std::string &p_path, const std::string &p_method, const std::string &p_options, int p_width, int p_height, int p_colours, int p_brightness, int p_contrast, int p_hue, int p_colour, bool p_capture, bool p_record_audio ) :
  Camera( p_id, FFMPEG_SRC, p_width, p_height, p_colours, ZM_SUBPIX_ORDER_DEFAULT_FOR_COLOUR(p_colours), p_brightness, p_contrast, p_hue, p_colour, p_capture, p_record_audio ),
  mPath( p_path ),
  mMethod( p_method ),
  mOptions( p_options )
@@ -46,15 +49,19 @@ FfmpegCamera::FfmpegCamera( int p_id, const std::string &p_path, const std::stri
  mFormatContext = NULL;
  mVideoStreamId = -1;
mAudioStreamId = -1;
  mCodecContext = NULL;
  mCodec = NULL;
  mRawFrame = NULL;
  mFrame = NULL;
  frameCount = 0;
startTime=0;
  mIsOpening = false;
  mCanCapture = false;
  mOpenStart = 0;
  mReopenThread = 0;
wasRecording = false;
videoStore = NULL;
#if HAVE_LIBSWSCALE
  mConvertContext = NULL;
@@ -101,6 +108,8 @@ void FfmpegCamera::Terminate()
int FfmpegCamera::PrimeCapture()
{
mVideoStreamId = -1;
mAudioStreamId = -1;
  Info( "Priming capture from %s", mPath.c_str() );
  if (OpenFfmpeg() != 0){
@@ -168,6 +177,7 @@ int FfmpegCamera::Capture( Image &image )
    return( -1 );
  }
  Debug( 5, "Got packet from stream %d", packet.stream_index );
// What about audio stream? Maybe someday we could do sound detection...
  if ( packet.stream_index == mVideoStreamId )
  {
#if LIBAVCODEC_VERSION_CHECK(52, 23, 0, 23, 0)
@@ -205,16 +215,18 @@ int FfmpegCamera::Capture( Image &image )
#endif // HAVE_LIBSWSCALE
    frameCount++;
    } // end if frameComplete
  } else {
    Debug( 4, "Different stream_index %d", packet.stream_index );
  } // end if packet.stream_index == mVideoStreamId
#if LIBAVCODEC_VERSION_CHECK(57, 8, 0, 12, 100)
  av_packet_unref( &packet);
#else
  av_free_packet( &packet );
#endif
} // end while ! frameComplete
return (0);
} // FfmpegCamera::Capture
int FfmpegCamera::PostCapture()
{
@@ -278,6 +290,12 @@ int FfmpegCamera::OpenFfmpeg() {
  mIsOpening = false;
  Debug ( 1, "Opened input" );
Info( "Stream open %s", mPath.c_str() );
startTime=av_gettime();//FIXME here or after find_Stream_info
//FIXME can speed up initial analysis but need sensible parameters...
//mFormatContext->probesize = 32;
//mFormatContext->max_analyze_duration = 32;
  // Locate stream info from avformat_open_input
#if !LIBAVFORMAT_VERSION_CHECK(53, 6, 0, 6, 0)
  Debug ( 1, "Calling av_find_stream_info" );
@@ -292,6 +310,7 @@ int FfmpegCamera::OpenFfmpeg() {
  // Find first video stream present
  mVideoStreamId = -1;
mAudioStreamId = -1;
  for (unsigned int i=0; i < mFormatContext->nb_streams; i++ )
  {
#if (LIBAVCODEC_VERSION_CHECK(52, 64, 0, 64, 0) || LIBAVUTIL_VERSION_CHECK(50, 14, 0, 14, 0))
@@ -300,12 +319,32 @@ int FfmpegCamera::OpenFfmpeg() {
    if ( mFormatContext->streams[i]->codec->codec_type == CODEC_TYPE_VIDEO )
#endif
    {
      if ( mVideoStreamId == -1 ) {
        mVideoStreamId = i;
// if we break, then we won't find the audio stream
continue;
} else {
Debug(2, "Have another video stream." );
}
}
#if (LIBAVCODEC_VERSION_CHECK(52, 64, 0, 64, 0) || LIBAVUTIL_VERSION_CHECK(50, 14, 0, 14, 0))
if ( mFormatContext->streams[i]->codec->codec_type == AVMEDIA_TYPE_AUDIO )
#else
if ( mFormatContext->streams[i]->codec->codec_type == CODEC_TYPE_AUDIO )
#endif
{
if ( mAudioStreamId == -1 ) {
mAudioStreamId = i;
} else {
Debug(2, "Have another audio stream." );
}
    }
  }
  if ( mVideoStreamId == -1 )
    Fatal( "Unable to locate video stream in %s", mPath.c_str() );
if ( mAudioStreamId == -1 )
Debug( 3, "Unable to locate audio stream in %s", mPath.c_str() );
  Debug ( 1, "Found video stream" );
@@ -469,4 +508,212 @@ void *FfmpegCamera::ReopenFfmpegThreadCallback(void *ctx){
  }
}
//Function to handle capture and store
int FfmpegCamera::CaptureAndRecord( Image &image, bool recording, char* event_file ){
if (!mCanCapture){
return -1;
}
// If the reopen thread has a value, but mCanCapture != 0, then we have just reopened the connection to the ffmpeg device, and we can clean up the thread.
if (mReopenThread != 0) {
void *retval = 0;
int ret;
ret = pthread_join(mReopenThread, &retval);
if (ret != 0){
Error("Could not join reopen thread.");
}
Info( "Successfully reopened stream." );
mReopenThread = 0;
}
AVPacket packet;
uint8_t* directbuffer;
/* Request a writeable buffer of the target image */
directbuffer = image.WriteBuffer(width, height, colours, subpixelorder);
if( directbuffer == NULL ) {
Error("Failed requesting writeable buffer for the captured image.");
return (-1);
}
if ( mCodecContext->codec_id != AV_CODEC_ID_H264 ) {
Error( "Input stream is not h264. The stored event file may not be viewable in browser." );
}
int frameComplete = false;
while ( !frameComplete ) {
int avResult = av_read_frame( mFormatContext, &packet );
if ( avResult < 0 ) {
char errbuf[AV_ERROR_MAX_STRING_SIZE];
av_strerror(avResult, errbuf, AV_ERROR_MAX_STRING_SIZE);
if (
// Check if EOF.
(avResult == AVERROR_EOF || (mFormatContext->pb && mFormatContext->pb->eof_reached)) ||
// Check for Connection failure.
(avResult == -110)
) {
Info( "av_read_frame returned \"%s\". Reopening stream.", errbuf);
ReopenFfmpeg();
}
Error( "Unable to read packet from stream %d: error %d \"%s\".", packet.stream_index, avResult, errbuf );
return( -1 );
}
Debug( 5, "Got packet from stream %d", packet.stream_index );
if ( packet.stream_index == mVideoStreamId ) {
#if LIBAVCODEC_VERSION_CHECK(52, 23, 0, 23, 0)
if ( avcodec_decode_video2( mCodecContext, mRawFrame, &frameComplete, &packet ) < 0 )
#else
if ( avcodec_decode_video( mCodecContext, mRawFrame, &frameComplete, packet.data, packet.size ) < 0 )
#endif
{
Error( "Unable to decode frame at frame %d, continuing...", frameCount );
av_free_packet( &packet );
continue;
}
Debug( 4, "Decoded video packet at frame %d", frameCount );
if ( frameComplete ) {
Debug( 3, "Got frame %d", frameCount );
avpicture_fill( (AVPicture *)mFrame, directbuffer, imagePixFormat, width, height);
//Keep the last keyframe so we can establish immediate video
/*if(packet.flags & AV_PKT_FLAG_KEY)
av_copy_packet(&lastKeyframePkt, &packet);*/
//TODO I think we need to store the key frame location for seeking as part of the event
//Video recording
if ( recording && !wasRecording ) {
//Instantiate the video storage module
if (record_audio) {
if (mAudioStreamId == -1) {
Debug(3, "Record Audio on but no audio stream found");
videoStore = new VideoStore((const char *) event_file, "mp4",
mFormatContext->streams[mVideoStreamId],
NULL,
startTime,
this->getMonitor()->getOrientation());
} else {
Debug(3, "Video module initiated with audio stream");
videoStore = new VideoStore((const char *) event_file, "mp4",
mFormatContext->streams[mVideoStreamId],
mFormatContext->streams[mAudioStreamId],
startTime,
this->getMonitor()->getOrientation());
}
} else {
Debug(3, "Record_audio is false so exclude audio stream");
videoStore = new VideoStore((const char *) event_file, "mp4",
mFormatContext->streams[mVideoStreamId],
NULL,
startTime,
this->getMonitor()->getOrientation());
}
wasRecording = true;
strcpy(oldDirectory, event_file);
} else if ( ( ! recording ) && wasRecording && videoStore ) {
Info("Deleting videoStore instance");
delete videoStore;
videoStore = NULL;
}
// The directory we are recording to is no longer tied to the current
// event. Need to re-init the videostore with the correct directory and
// start recording again
if (recording && wasRecording && (strcmp(oldDirectory, event_file) != 0)
&& (packet.flags & AV_PKT_FLAG_KEY)) {
// Don't open new videostore until we're on a key frame..would this
// require an offset adjustment for the event as a result?...if we store
// our key frame location with the event will that be enough?
Info("Re-starting video storage module");
if(videoStore){
delete videoStore;
videoStore = NULL;
}
if (record_audio) {
if (mAudioStreamId == -1) {
Debug(3, "Record Audio on but no audio stream found");
videoStore = new VideoStore((const char *) event_file, "mp4",
mFormatContext->streams[mVideoStreamId],
NULL,
startTime,
this->getMonitor()->getOrientation());
} else {
Debug(3, "Video module initiated with audio stream");
videoStore = new VideoStore((const char *) event_file, "mp4",
mFormatContext->streams[mVideoStreamId],
mFormatContext->streams[mAudioStreamId],
startTime,
this->getMonitor()->getOrientation());
}
} else {
Debug(3, "Record_audio is false so exclude audio stream");
videoStore = new VideoStore((const char *) event_file, "mp4",
mFormatContext->streams[mVideoStreamId],
NULL, startTime,
this->getMonitor()->getOrientation());
}
strcpy(oldDirectory, event_file);
}
if ( videoStore && recording ) {
//Write the packet to our video store
int ret = videoStore->writeVideoFramePacket(&packet,
mFormatContext->streams[mVideoStreamId]); //, &lastKeyframePkt);
if(ret<0){//Less than zero and we skipped a frame
av_free_packet( &packet );
return 0;
}
}
#if HAVE_LIBSWSCALE
if ( mConvertContext == NULL ) {
mConvertContext = sws_getContext(mCodecContext->width,
mCodecContext->height,
mCodecContext->pix_fmt,
width, height,
imagePixFormat, SWS_BICUBIC, NULL,
NULL, NULL);
if ( mConvertContext == NULL )
Fatal( "Unable to create conversion context for %s", mPath.c_str() );
}
if (sws_scale(mConvertContext, mRawFrame->data, mRawFrame->linesize,
0, mCodecContext->height, mFrame->data, mFrame->linesize) < 0)
Fatal("Unable to convert raw format %u to target format %u at frame %d",
mCodecContext->pix_fmt, imagePixFormat, frameCount);
#else // HAVE_LIBSWSCALE
Fatal( "You must compile ffmpeg with the --enable-swscale option to use ffmpeg cameras" );
#endif // HAVE_LIBSWSCALE
frameCount++;
} // end if frameComplete
} else if ( packet.stream_index == mAudioStreamId ) { //FIXME best way to copy all other streams
    if ( videoStore && recording ) {
      if ( record_audio ) {
        Debug(4, "Recording audio packet");
        // Write the packet to our video store
        int ret = videoStore->writeAudioFramePacket(&packet,
            mFormatContext->streams[packet.stream_index]); // FIXME no relevance of last key frame
        if ( ret < 0 ) { // Less than zero and we skipped a frame
          av_free_packet( &packet );
          return 0;
        }
      } else {
        Debug(4, "Not recording audio packet");
      }
    }
}
av_free_packet( &packet );
} // end while ! frameComplete
return (frameCount);
}
#endif // HAVE_LIBAVFORMAT


@@ -25,6 +25,7 @@
#include "zm_buffer.h"
//#include "zm_utils.h"
#include "zm_ffmpeg.h"
#include "zm_videostore.h"
//
// Class representing 'ffmpeg' cameras, i.e. those which are
@@ -41,12 +42,13 @@ protected:
#if HAVE_LIBAVFORMAT
AVFormatContext *mFormatContext;
int mVideoStreamId;
int mAudioStreamId;
AVCodecContext *mCodecContext;
AVCodec *mCodec;
AVFrame *mRawFrame;
AVFrame *mFrame;
_AVPIXELFORMAT imagePixFormat;
int OpenFfmpeg();
int ReopenFfmpeg();
@@ -59,12 +61,19 @@ protected:
pthread_t mReopenThread;
#endif // HAVE_LIBAVFORMAT
bool wasRecording;
VideoStore *videoStore;
char oldDirectory[4096];
//AVPacket lastKeyframePkt;
#if HAVE_LIBSWSCALE
struct SwsContext *mConvertContext;
#endif
int64_t startTime;
public:
FfmpegCamera( int p_id, const std::string &path, const std::string &p_method, const std::string &p_options, int p_width, int p_height, int p_colours, int p_brightness, int p_contrast, int p_hue, int p_colour, bool p_capture, bool p_record_audio );
~FfmpegCamera();
const std::string &Path() const { return( mPath ); }
@@ -77,6 +86,7 @@ public:
int PrimeCapture();
int PreCapture();
int Capture( Image &image );
int CaptureAndRecord( Image &image, bool recording, char* event_directory );
int PostCapture();
};


@@ -34,7 +34,7 @@
#include "zm.h"
#include "zm_file_camera.h"
FileCamera::FileCamera( int p_id, const char *p_path, int p_width, int p_height, int p_colours, int p_brightness, int p_contrast, int p_hue, int p_colour, bool p_capture, bool p_record_audio ) : Camera( p_id, FILE_SRC, p_width, p_height, p_colours, ZM_SUBPIX_ORDER_DEFAULT_FOR_COLOUR(p_colours), p_brightness, p_contrast, p_hue, p_colour, p_capture, p_record_audio )
{
strncpy( path, p_path, sizeof(path) );
if ( capture )


@@ -36,7 +36,7 @@ protected:
char path[PATH_MAX];
public:
FileCamera( int p_id, const char *p_path, int p_width, int p_height, int p_colours, int p_brightness, int p_contrast, int p_hue, int p_colour, bool p_capture, bool p_record_audio );
~FileCamera();
const char *Path() const { return( path ); }
@@ -46,6 +46,7 @@ public:
int PreCapture();
int Capture( Image &image );
int PostCapture();
int CaptureAndRecord( Image &image, bool recording, char* event_directory ) {return(0);};
};
#endif // ZM_FILE_CAMERA_H


@@ -61,8 +61,8 @@ void LibvlcUnlockBuffer(void* opaque, void* picture, void *const *planes)
}
}
LibvlcCamera::LibvlcCamera( int p_id, const std::string &p_path, const std::string &p_method, const std::string &p_options, int p_width, int p_height, int p_colours, int p_brightness, int p_contrast, int p_hue, int p_colour, bool p_capture, bool p_record_audio ) :
Camera( p_id, LIBVLC_SRC, p_width, p_height, p_colours, ZM_SUBPIX_ORDER_DEFAULT_FOR_COLOUR(p_colours), p_brightness, p_contrast, p_hue, p_colour, p_capture, p_record_audio ),
mPath( p_path ),
mMethod( p_method ),
mOptions( p_options )
@@ -211,6 +211,20 @@ int LibvlcCamera::Capture( Image &image )
return (0);
}
// Should not return -1, as that cancels the capture. Always wait for an image if one is available.
int LibvlcCamera::CaptureAndRecord( Image &image, bool recording, char* event_directory )
{
  while ( !mLibvlcData.newImage.getValueImmediate() )
    mLibvlcData.newImage.getUpdatedValue(1);

  mLibvlcData.mutex.lock();
  image.Assign(width, height, colours, subpixelorder, mLibvlcData.buffer, width * height * mBpp);
  mLibvlcData.newImage.setValueImmediate(false);
  mLibvlcData.mutex.unlock();

  return (0);
}
int LibvlcCamera::PostCapture()
{
return(0);


@@ -57,7 +57,7 @@ protected:
libvlc_media_player_t *mLibvlcMediaPlayer;
public:
LibvlcCamera( int p_id, const std::string &path, const std::string &p_method, const std::string &p_options, int p_width, int p_height, int p_colours, int p_brightness, int p_contrast, int p_hue, int p_colour, bool p_capture, bool p_record_audio );
~LibvlcCamera();
const std::string &Path() const { return( mPath ); }
@@ -70,6 +70,7 @@ public:
int PrimeCapture();
int PreCapture();
int Capture( Image &image );
int CaptureAndRecord( Image &image, bool recording, char* event_directory );
int PostCapture();
};


@@ -286,8 +286,26 @@
AVFrame **LocalCamera::capturePictures = 0;
LocalCamera *LocalCamera::last_camera = NULL;
LocalCamera::LocalCamera(
  int p_id,
  const std::string &p_device,
  int p_channel,
  int p_standard,
  bool p_v4l_multi_buffer,
  unsigned int p_v4l_captures_per_frame,
  const std::string &p_method,
  int p_width,
  int p_height,
  int p_colours,
  int p_palette,
  int p_brightness,
  int p_contrast,
  int p_hue,
  int p_colour,
  bool p_capture,
  bool p_record_audio,
  unsigned int p_extras) :
  Camera( p_id, LOCAL_SRC, p_width, p_height, p_colours, ZM_SUBPIX_ORDER_DEFAULT_FOR_COLOUR(p_colours), p_brightness, p_contrast, p_hue, p_colour, p_capture, p_record_audio ),
  device( p_device ),
  channel( p_channel ),
  standard( p_standard ),


@@ -121,7 +121,25 @@ protected:
static LocalCamera *last_camera;
public:
LocalCamera(
  int p_id,
  const std::string &device,
  int p_channel,
  int p_format,
  bool v4lmultibuffer,
  unsigned int v4lcapturesperframe,
  const std::string &p_method,
  int p_width,
  int p_height,
  int p_colours,
  int p_palette,
  int p_brightness,
  int p_contrast,
  int p_hue,
  int p_colour,
  bool p_capture,
  bool p_record_audio,
  unsigned int p_extras = 0);
~LocalCamera();
void Initialise();
@@ -143,6 +161,7 @@ public:
int PreCapture();
int Capture( Image &image );
int PostCapture();
int CaptureAndRecord( Image &image, bool recording, char* event_directory ) {return(0);};
static bool GetCurrentSettings( const char *device, char *output, int version, bool verbose );
};


@@ -28,6 +28,7 @@
#include "zm_mpeg.h"
#include "zm_signal.h"
#include "zm_monitor.h"
#include "zm_video.h"
#if ZM_HAS_V4L
#include "zm_local_camera.h"
#endif // ZM_HAS_V4L
@@ -276,6 +277,10 @@ Monitor::Monitor(
Camera *p_camera,
int p_orientation,
unsigned int p_deinterlacing,
int p_savejpegs,
int p_videowriter,
std::string p_encoderparams,
bool p_record_audio,
const char *p_event_prefix,
const char *p_label_format,
const Coord &p_label_coord,
@@ -310,6 +315,10 @@ Monitor::Monitor(
height( (p_orientation==ROTATE_90||p_orientation==ROTATE_270)?p_camera->Width():p_camera->Height() ),
orientation( (Orientation)p_orientation ),
deinterlacing( p_deinterlacing ),
savejpegspref( p_savejpegs ),
videowriterpref( p_videowriter ),
encoderparams( p_encoderparams ),
record_audio( p_record_audio ),
label_coord( p_label_coord ),
label_size( p_label_size ),
image_buffer_count( p_image_buffer_count ),
@@ -365,6 +374,9 @@ Monitor::Monitor(
}
}
/* Parse encoder parameters */
ParseEncoderParameters(encoderparams.c_str(), &encoderparamsvec);
fps = 0.0;
event_count = 0;
image_count = 0;
@@ -391,6 +403,7 @@ Monitor::Monitor(
mem_size = sizeof(SharedData)
+ sizeof(TriggerData)
+ sizeof(VideoStoreData) // Information to pass back to the capture process
+ (image_buffer_count*sizeof(struct timeval))
+ (image_buffer_count*camera->ImageSize())
+ 64; /* Padding used to permit aligning the images buffer to 64 byte boundary */
@@ -426,6 +439,10 @@ Monitor::Monitor(
trigger_data->trigger_text[0] = 0;
trigger_data->trigger_showtext[0] = 0;
shared_data->valid = true;
video_store_data->recording = false;
snprintf(video_store_data->event_file, sizeof(video_store_data->event_file), "nothing");
video_store_data->size = sizeof(VideoStoreData);
//video_store_data->frameNumber = 0;
} else if ( purpose == ANALYSIS ) {
this->connect();
if ( ! mem_ptr ) exit(-1);
@@ -445,6 +462,7 @@ Monitor::Monitor(
}
// Will this not happen every time a monitor is instantiated? Seems like all the calls to the Monitor constructor pass a zero for n_zones, then load zones after.
// In my storage areas branch, I took this out and didn't notice any problems.
if ( !n_zones ) {
Debug( 1, "Monitor %s has no zones, adding one.", name );
n_zones = 1;
@@ -566,7 +584,8 @@ bool Monitor::connect() {
#endif // ZM_MEM_MAPPED
shared_data = (SharedData *)mem_ptr;
trigger_data = (TriggerData *)((char *)shared_data + sizeof(SharedData));
video_store_data = (VideoStoreData *)((char *)trigger_data + sizeof(TriggerData));
struct timeval *shared_timestamps = (struct timeval *)((char *)video_store_data + sizeof(VideoStoreData));
unsigned char *shared_images = (unsigned char *)((char *)shared_timestamps + (image_buffer_count*sizeof(struct timeval)));
if(((unsigned long)shared_images % 64) != 0) {
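The two hunks above size and carve up the monitor's shared-memory segment in the same order on both sides: `SharedData`, `TriggerData`, the new `VideoStoreData`, per-image timestamps, then the image buffers. A minimal stand-alone sketch of that arithmetic follows; the struct sizes here are invented placeholders, and only the field ordering and the 64-byte alignment rule mirror the patch:

```cpp
#include <cassert>
#include <cstddef>

// Illustrative stand-ins for the real ZoneMinder structs; the sizes are
// made up, only the layout order and alignment rule reflect the patch.
struct SharedData     { char pad[328]; };
struct TriggerData    { char pad[560]; };
struct VideoStoreData { char pad[4104]; };
struct TimeVal        { long tv_sec, tv_usec; };

// Mirror of the mem_size computation: headers, per-image timestamps,
// image data, plus 64 bytes so the image buffer can be 64-byte aligned.
size_t mem_size(int image_buffer_count, size_t image_size) {
  return sizeof(SharedData) + sizeof(TriggerData) + sizeof(VideoStoreData)
       + image_buffer_count * sizeof(TimeVal)
       + image_buffer_count * image_size
       + 64;
}

// Offset of the image area; connect() bumps it up to the next multiple
// of 64 when needed, which the +64 padding above guarantees still fits.
size_t images_offset(int image_buffer_count) {
  size_t off = sizeof(SharedData) + sizeof(TriggerData) + sizeof(VideoStoreData)
             + image_buffer_count * sizeof(TimeVal);
  if (off % 64 != 0)
    off += 64 - (off % 64);  // round up to a 64-byte boundary
  return off;
}
```

Because `VideoStoreData` is inserted between `TriggerData` and the timestamps, every pointer after it shifts, which is why both the writer and the reader must be rebuilt against the same layout.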
@@ -621,9 +640,10 @@ Monitor::~Monitor()
privacy_bitmask = NULL;
}
if ( mem_ptr ) {
if ( event ) {
Info( "%s: %03d - Closing event %d, shutting down", name, image_count, event->Id() );
closeEvent();
}
if ( (deinterlacing & 0xff) == 4)
{
@@ -1284,6 +1304,7 @@ bool Monitor::Analyse()
{
if ( shared_data->last_read_index == shared_data->last_write_index )
{
// I wonder how often this happens. Maybe if this happens we should sleep or something?
return( false );
}
@@ -1304,8 +1325,10 @@ bool Monitor::Analyse()
if ( read_margin < 0 ) read_margin += image_buffer_count;
int step = 1;
// Isn't read_margin always > 0 here?
if ( read_margin > 0 )
{
// TODO: explain this. So... 90% of image buffer / 50% of read margin?
step = (9*image_buffer_count)/(5*read_margin);
}
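The stride formula in the hunk above is easier to sanity-check in isolation. `analysis_step` is a hypothetical helper (not in the patch) that restates it verbatim: the integer form of 1.8 × image_buffer_count / read_margin, which is where the "90% / 50%" guess in the comment comes from; the smaller the margin, the bigger the catch-up stride:

```cpp
#include <cassert>

// The analyser's catch-up stride, restated from the hunk above:
// integer form of (0.9 * image_buffer_count) / (0.5 * read_margin).
int analysis_step(int image_buffer_count, int read_margin) {
  int step = 1;  // default: examine every frame
  if (read_margin > 0)
    step = (9 * image_buffer_count) / (5 * read_margin);
  return step;
}
```

For a 50-frame ring buffer, a margin of 45 gives a stride of 2 while a margin of 5 gives 18, so the analyser skips aggressively only when it has little slack left.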
@@ -1385,6 +1408,7 @@ bool Monitor::Analyse()
if ( static_undef )
{
// Sure would be nice to be able to assume that these were already initialized. It's just 1 compare/branch, but really not necessary.
static_undef = false;
timestamps = new struct timeval *[pre_event_count];
images = new Image *[pre_event_count];
@@ -1395,6 +1419,10 @@ bool Monitor::Analyse()
{
bool signal = shared_data->signal;
bool signal_change = (signal != last_signal);
// Set the video recording flag for the event start constructor and for easy reference in code
// TODO: Use an enum instead of the number 2; makes for easier reading
bool videoRecording = ((GetOptVideoWriter() == 2) && camera->SupportsNativeVideo());
if ( trigger_data->trigger_state != TRIGGER_OFF )
{
unsigned int score = 0;
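One way the TODO above could be addressed is a named enum in place of the magic number. This is a sketch, not part of the patch: the value meanings (0 = disabled, 1 = encode with x264, 2 = camera passthrough) are inferred from how this branch uses them and would need to match the `VideoWriter` column in the database:

```cpp
#include <cassert>

// Hypothetical replacement for the magic number 2 flagged in the TODO.
// Values are an assumption inferred from this branch, not from the patch.
enum VideoWriter {
  VIDEOWRITER_DISABLED    = 0,  // no video file written
  VIDEOWRITER_X264ENCODE  = 1,  // re-encode captured frames with x264
  VIDEOWRITER_PASSTHROUGH = 2   // write the camera's h264 stream directly
};

// Mirrors the videoRecording condition in the hunk above.
bool wants_native_video(int videowriter_pref, bool camera_supports_native) {
  return videowriter_pref == VIDEOWRITER_PASSTHROUGH && camera_supports_native;
}
```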
@@ -1506,10 +1534,13 @@ bool Monitor::Analyse()
if ( noteSet.size() > 0 )
noteSetMap[LINKED_CAUSE] = noteSet;
}
// TODO: What happens if the event closes and sets recording to false, then back to true, so quickly that our capture daemon never picks it up? Maybe we need a refresh flag?
if ( (!signal_change && signal) && (function == RECORD || function == MOCORD) )
{
if ( event )
{
// TODO: We shouldn't have to do this every time. Not sure why it clears itself if this isn't here?
snprintf(video_store_data->event_file, sizeof(video_store_data->event_file), "%s", event->getEventFile());
int section_mod = timestamp->tv_sec%section_length;
if ( section_mod < last_section_mod )
{
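The modulo test in the hunk above is what splits continuous recording into fixed-length events: a section boundary has passed whenever `tv_sec % section_length` wraps back below its previous value. `section_expired` is a hypothetical restatement of that check, not a function in the patch:

```cpp
#include <cassert>

// Restates the section-rollover test: events in RECORD/MOCORD mode are
// split every section_length seconds, detected when the modulo wraps.
bool section_expired(long tv_sec, int section_length, int &last_section_mod) {
  int section_mod = static_cast<int>(tv_sec % section_length);
  bool expired = section_mod < last_section_mod;  // wrapped past a boundary
  last_section_mod = section_mod;
  return expired;
}
```

With a 600-second section length, timestamps 100 and 550 stay in one section, while 620 wraps the modulo (20 < 550) and triggers a new event.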
@@ -1535,8 +1566,11 @@ bool Monitor::Analyse()
{
// Create event
event = new Event( this, *timestamp, "Continuous", noteSetMap, videoRecording );
shared_data->last_event = event->Id();
// Set up video store data
snprintf(video_store_data->event_file, sizeof(video_store_data->event_file), "%s", event->getEventFile());
video_store_data->recording = true;
Info( "%s: %03d - Opening new event %d, section start", name, image_count, event->Id() );
@@ -1656,6 +1690,9 @@ bool Monitor::Analyse()
event = new Event( this, *(image_buffer[pre_index].timestamp), cause, noteSetMap );
}
shared_data->last_event = event->Id();
// Set up video store data
snprintf(video_store_data->event_file, sizeof(video_store_data->event_file), "%s", event->getEventFile());
video_store_data->recording = true;
Info( "%s: %03d - Opening new event %d, alarm start", name, image_count, event->Id() );
@@ -1802,6 +1839,11 @@ bool Monitor::Analyse()
}
else if ( state == TAPE )
{
// Video Storage: activate only for supported cameras. Event::AddFrame knows whether or not we are recording video and saves frames accordingly.
if ( (GetOptVideoWriter() == 2) && camera->SupportsNativeVideo() )
{
video_store_data->recording = true;
}
if ( !(image_count%(frame_skip+1)) )
{
if ( config.bulk_frame_interval > 1 )
@@ -1865,6 +1907,7 @@ void Monitor::Reload()
closeEvent();
static char sql[ZM_SQL_MED_BUFSIZ];
// This seems to have fallen out of date.
snprintf( sql, sizeof(sql), "select Function+0, Enabled, LinkedMonitors, EventPrefix, LabelFormat, LabelX, LabelY, LabelSize, WarmupCount, PreEventCount, PostEventCount, AlarmFrameCount, SectionLength, FrameSkip, MotionFrameSkip, AnalysisFPS, AnalysisUpdateDelay, MaxFPS, AlarmMaxFPS, FPSReportInterval, RefBlendPerc, AlarmRefBlendPerc, TrackMotion, SignalCheckColour from Monitors where Id = '%d'", id );
if ( mysql_query( &dbconn, sql ) )
@@ -2057,7 +2100,7 @@ void Monitor::ReloadLinkedMonitors( const char *p_linked_monitors )
#if ZM_HAS_V4L
int Monitor::LoadLocalMonitors( const char *device, Monitor **&monitors, Purpose purpose )
{
std::string sql = "select Id, Name, ServerId, Function+0, Enabled, LinkedMonitors, Device, Channel, Format, V4LMultiBuffer, V4LCapturesPerFrame, Method, Width, Height, Colours, Palette, Orientation+0, Deinterlacing, SaveJPEGs, VideoWriter, EncoderParameters, RecordAudio, Brightness, Contrast, Hue, Colour, EventPrefix, LabelFormat, LabelX, LabelY, LabelSize, ImageBufferCount, WarmupCount, PreEventCount, PostEventCount, StreamReplayBuffer, AlarmFrameCount, SectionLength, FrameSkip, MotionFrameSkip, AnalysisFPS, AnalysisUpdateDelay, MaxFPS, AlarmMaxFPS, FPSReportInterval, RefBlendPerc, AlarmRefBlendPerc, TrackMotion, SignalCheckColour, Exif from Monitors where Function != 'None' and Type = 'Local'";
if ( device[0] ) {
sql += " AND Device='";
sql += device;
@@ -2117,6 +2160,12 @@ Debug( 1, "Got %d for v4l_captures_per_frame", v4l_captures_per_frame );
int palette = atoi(dbrow[col]); col++;
Orientation orientation = (Orientation)atoi(dbrow[col]); col++;
unsigned int deinterlacing = atoi(dbrow[col]); col++;
int savejpegs = atoi(dbrow[col]); col++;
int videowriter = atoi(dbrow[col]); col++;
std::string encoderparams = dbrow[col]; col++;
bool record_audio = (*dbrow[col] != '0'); col++;
int brightness = atoi(dbrow[col]); col++;
int contrast = atoi(dbrow[col]); col++;
int hue = atoi(dbrow[col]); col++;
@@ -2174,6 +2223,7 @@ Debug( 1, "Got %d for v4l_captures_per_frame", v4l_captures_per_frame );
hue,
colour,
purpose==CAPTURE,
record_audio,
extras
);
@@ -2187,6 +2237,10 @@ Debug( 1, "Got %d for v4l_captures_per_frame", v4l_captures_per_frame );
camera,
orientation,
deinterlacing,
savejpegs,
videowriter,
encoderparams,
record_audio,
event_prefix,
label_format,
Coord( label_x, label_y ),
@@ -2234,7 +2288,7 @@ Debug( 1, "Got %d for v4l_captures_per_frame", v4l_captures_per_frame );
int Monitor::LoadRemoteMonitors( const char *protocol, const char *host, const char *port, const char *path, Monitor **&monitors, Purpose purpose )
{
std::string sql = "select Id, Name, ServerId, Function+0, Enabled, LinkedMonitors, Protocol, Method, Host, Port, Path, Width, Height, Colours, Palette, Orientation+0, Deinterlacing, RTSPDescribe, SaveJPEGs, VideoWriter, EncoderParameters, RecordAudio, Brightness, Contrast, Hue, Colour, EventPrefix, LabelFormat, LabelX, LabelY, LabelSize, ImageBufferCount, WarmupCount, PreEventCount, PostEventCount, StreamReplayBuffer, AlarmFrameCount, SectionLength, FrameSkip, MotionFrameSkip, AnalysisFPS, AnalysisUpdateDelay, MaxFPS, AlarmMaxFPS, FPSReportInterval, RefBlendPerc, AlarmRefBlendPerc, TrackMotion, Exif from Monitors where Function != 'None' and Type = 'Remote'";
if ( staticConfig.SERVER_ID ) {
sql += stringtf( " AND ServerId=%d", staticConfig.SERVER_ID );
}
@@ -2277,6 +2331,11 @@ int Monitor::LoadRemoteMonitors( const char *protocol, const char *host, const c
Orientation orientation = (Orientation)atoi(dbrow[col]); col++;
unsigned int deinterlacing = atoi(dbrow[col]); col++;
bool rtsp_describe = (*dbrow[col] != '0'); col++;
int savejpegs = atoi(dbrow[col]); col++;
int videowriter = atoi(dbrow[col]); col++;
std::string encoderparams = dbrow[col]; col++;
bool record_audio = (*dbrow[col] != '0'); col++;
int brightness = atoi(dbrow[col]); col++;
int contrast = atoi(dbrow[col]); col++;
int hue = atoi(dbrow[col]); col++;
@@ -2324,7 +2383,8 @@ int Monitor::LoadRemoteMonitors( const char *protocol, const char *host, const c
contrast,
hue,
colour,
purpose==CAPTURE,
record_audio
);
}
#if HAVE_LIBAVFORMAT
@@ -2344,7 +2404,8 @@ int Monitor::LoadRemoteMonitors( const char *protocol, const char *host, const c
contrast,
hue,
colour,
purpose==CAPTURE,
record_audio
);
}
#endif // HAVE_LIBAVFORMAT
@@ -2363,6 +2424,10 @@ int Monitor::LoadRemoteMonitors( const char *protocol, const char *host, const c
camera,
orientation,
deinterlacing,
savejpegs,
videowriter,
encoderparams,
record_audio,
event_prefix.c_str(),
label_format.c_str(),
Coord( label_x, label_y ),
@@ -2410,7 +2475,7 @@ int Monitor::LoadRemoteMonitors( const char *protocol, const char *host, const c
int Monitor::LoadFileMonitors( const char *file, Monitor **&monitors, Purpose purpose )
{
std::string sql = "select Id, Name, ServerId, Function+0, Enabled, LinkedMonitors, Path, Width, Height, Colours, Palette, Orientation+0, Deinterlacing, SaveJPEGs, VideoWriter, EncoderParameters, RecordAudio, Brightness, Contrast, Hue, Colour, EventPrefix, LabelFormat, LabelX, LabelY, LabelSize, ImageBufferCount, WarmupCount, PreEventCount, PostEventCount, StreamReplayBuffer, AlarmFrameCount, SectionLength, FrameSkip, MotionFrameSkip, AnalysisFPS, AnalysisUpdateDelay, MaxFPS, AlarmMaxFPS, FPSReportInterval, RefBlendPerc, AlarmRefBlendPerc, TrackMotion, Exif from Monitors where Function != 'None' and Type = 'File'";
if ( file[0] ) {
sql += " AND Path='";
sql += file;
@@ -2449,6 +2514,12 @@ int Monitor::LoadFileMonitors( const char *file, Monitor **&monitors, Purpose pu
/* int palette = atoi(dbrow[col]); */ col++;
Orientation orientation = (Orientation)atoi(dbrow[col]); col++;
unsigned int deinterlacing = atoi(dbrow[col]); col++;
int savejpegs = atoi(dbrow[col]); col++;
int videowriter = atoi(dbrow[col]); col++;
std::string encoderparams = dbrow[col]; col++;
bool record_audio = (*dbrow[col] != '0'); col++;
int brightness = atoi(dbrow[col]); col++;
int contrast = atoi(dbrow[col]); col++;
int hue = atoi(dbrow[col]); col++;
@@ -2490,7 +2561,8 @@ int Monitor::LoadFileMonitors( const char *file, Monitor **&monitors, Purpose pu
contrast,
hue,
colour,
purpose==CAPTURE,
record_audio
);
monitors[i] = new Monitor(
@@ -2503,6 +2575,10 @@ int Monitor::LoadFileMonitors( const char *file, Monitor **&monitors, Purpose pu
camera,
orientation,
deinterlacing,
savejpegs,
videowriter,
encoderparams,
record_audio,
event_prefix,
label_format,
Coord( label_x, label_y ),
@@ -2550,7 +2626,7 @@ int Monitor::LoadFileMonitors( const char *file, Monitor **&monitors, Purpose pu
#if HAVE_LIBAVFORMAT
int Monitor::LoadFfmpegMonitors( const char *file, Monitor **&monitors, Purpose purpose )
{
std::string sql = "select Id, Name, ServerId, Function+0, Enabled, LinkedMonitors, Path, Method, Options, Width, Height, Colours, Palette, Orientation+0, Deinterlacing, SaveJPEGs, VideoWriter, EncoderParameters, RecordAudio, Brightness, Contrast, Hue, Colour, EventPrefix, LabelFormat, LabelX, LabelY, LabelSize, ImageBufferCount, WarmupCount, PreEventCount, PostEventCount, StreamReplayBuffer, AlarmFrameCount, SectionLength, FrameSkip, MotionFrameSkip, AnalysisFPS, AnalysisUpdateDelay, MaxFPS, AlarmMaxFPS, FPSReportInterval, RefBlendPerc, AlarmRefBlendPerc, TrackMotion, Exif from Monitors where Function != 'None' and Type = 'Ffmpeg'";
if ( file[0] ) {
sql += " AND Path = '";
sql += file;
@@ -2591,6 +2667,12 @@ int Monitor::LoadFfmpegMonitors( const char *file, Monitor **&monitors, Purpose
/* int palette = atoi(dbrow[col]); */ col++;
Orientation orientation = (Orientation)atoi(dbrow[col]); col++;
unsigned int deinterlacing = atoi(dbrow[col]); col++;
int savejpegs = atoi(dbrow[col]); col++;
int videowriter = atoi(dbrow[col]); col++;
std::string encoderparams = dbrow[col]; col++;
bool record_audio = (*dbrow[col] != '0'); col++;
int brightness = atoi(dbrow[col]); col++;
int contrast = atoi(dbrow[col]); col++;
int hue = atoi(dbrow[col]); col++;
@@ -2634,7 +2716,8 @@ int Monitor::LoadFfmpegMonitors( const char *file, Monitor **&monitors, Purpose
contrast,
hue,
colour,
purpose==CAPTURE,
record_audio
);
monitors[i] = new Monitor(
@@ -2647,6 +2730,10 @@ int Monitor::LoadFfmpegMonitors( const char *file, Monitor **&monitors, Purpose
camera,
orientation,
deinterlacing,
savejpegs,
videowriter,
encoderparams,
record_audio,
event_prefix,
label_format,
Coord( label_x, label_y ),
@@ -2694,7 +2781,7 @@ int Monitor::LoadFfmpegMonitors( const char *file, Monitor **&monitors, Purpose
Monitor *Monitor::Load( unsigned int p_id, bool load_zones, Purpose purpose )
{
std::string sql = stringtf( "select Id, Name, ServerId, Type, Function+0, Enabled, LinkedMonitors, Device, Channel, Format, V4LMultiBuffer, V4LCapturesPerFrame, Protocol, Method, Host, Port, Path, Options, User, Pass, Width, Height, Colours, Palette, Orientation+0, Deinterlacing, RTSPDescribe, SaveJPEGs, VideoWriter, EncoderParameters, RecordAudio, Brightness, Contrast, Hue, Colour, EventPrefix, LabelFormat, LabelX, LabelY, LabelSize, ImageBufferCount, WarmupCount, PreEventCount, PostEventCount, StreamReplayBuffer, AlarmFrameCount, SectionLength, FrameSkip, MotionFrameSkip, AnalysisFPS, AnalysisUpdateDelay, MaxFPS, AlarmMaxFPS, FPSReportInterval, RefBlendPerc, AlarmRefBlendPerc, TrackMotion, SignalCheckColour, Exif from Monitors where Id = %d", p_id );
MYSQL_ROW dbrow = zmDbFetchOne( sql.c_str() );
if ( ! dbrow ) {
@@ -2751,6 +2838,11 @@ Debug( 1, "Got %d for v4l_captures_per_frame", v4l_captures_per_frame );
Orientation orientation = (Orientation)atoi(dbrow[col]); col++;
unsigned int deinterlacing = atoi(dbrow[col]); col++;
bool rtsp_describe = (*dbrow[col] != '0'); col++;
int savejpegs = atoi(dbrow[col]); col++;
int videowriter = atoi(dbrow[col]); col++;
std::string encoderparams = dbrow[col]; col++;
bool record_audio = (*dbrow[col] != '0'); col++;
int brightness = atoi(dbrow[col]); col++;
int contrast = atoi(dbrow[col]); col++;
int hue = atoi(dbrow[col]); col++;
@@ -2812,6 +2904,7 @@ Debug( 1, "Got %d for v4l_captures_per_frame", v4l_captures_per_frame );
hue,
colour,
purpose==CAPTURE,
record_audio,
extras
);
#else // ZM_HAS_V4L
@@ -2835,7 +2928,8 @@ Debug( 1, "Got %d for v4l_captures_per_frame", v4l_captures_per_frame );
contrast,
hue,
colour,
purpose==CAPTURE,
record_audio
);
}
else if ( protocol == "rtsp" )
@@ -2855,7 +2949,8 @@ Debug( 1, "Got %d for v4l_captures_per_frame", v4l_captures_per_frame );
contrast,
hue,
colour,
purpose==CAPTURE,
record_audio
);
#else // HAVE_LIBAVFORMAT
Fatal( "You must have ffmpeg libraries installed to use remote camera protocol '%s' for monitor %d", protocol.c_str(), id );
@@ -2878,7 +2973,8 @@ Debug( 1, "Got %d for v4l_captures_per_frame", v4l_captures_per_frame );
contrast,
hue,
colour,
purpose==CAPTURE,
record_audio
);
}
else if ( type == "Ffmpeg" )
@@ -2896,7 +2992,8 @@ Debug( 1, "Got %d for v4l_captures_per_frame", v4l_captures_per_frame );
contrast,
hue,
colour,
purpose==CAPTURE,
record_audio
);
#else // HAVE_LIBAVFORMAT
Fatal( "You must have ffmpeg libraries installed to use ffmpeg cameras for monitor %d", id );
@@ -2917,7 +3014,8 @@ Debug( 1, "Got %d for v4l_captures_per_frame", v4l_captures_per_frame );
contrast,
hue,
colour,
purpose==CAPTURE,
record_audio
);
#else // HAVE_LIBVLC
Fatal( "You must have vlc libraries installed to use vlc cameras for monitor %d", id );
@@ -2938,7 +3036,8 @@ Debug( 1, "Got %d for v4l_captures_per_frame", v4l_captures_per_frame );
contrast,
hue,
colour,
purpose==CAPTURE,
record_audio
);
#else // HAVE_LIBCURL
Fatal( "You must have libcurl installed to use ffmpeg cameras for monitor %d", id );
@@ -2958,6 +3057,10 @@ Debug( 1, "Got %d for v4l_captures_per_frame", v4l_captures_per_frame );
camera,
orientation,
deinterlacing,
savejpegs,
videowriter,
encoderparams,
record_audio,
event_prefix.c_str(),
label_format.c_str(),
Coord( label_x, label_y ),
@@ -3014,7 +3117,13 @@ int Monitor::Capture()
}
/* Capture a new next image */
//Check if FFMPEG camera
if ( (GetOptVideoWriter() == 2) && camera->SupportsNativeVideo() ) {
captureResult = camera->CaptureAndRecord(*(next_buffer.image), video_store_data->recording, video_store_data->event_file);
} else {
captureResult = camera->Capture(*(next_buffer.image));
}
if ( FirstCapture ) {
FirstCapture = 0;
@@ -3022,10 +3131,22 @@ int Monitor::Capture()
}
} else {
//Check if FFMPEG camera
if ( (GetOptVideoWriter() == 2) && camera->SupportsNativeVideo() ) {
//Warning("ZMC: Recording: %d", video_store_data->recording);
captureResult = camera->CaptureAndRecord(*capture_image, video_store_data->recording, video_store_data->event_file);
} else {
/* Capture directly into image buffer, avoiding the need to memcpy() */
captureResult = camera->Capture(*capture_image);
}
}
if ( (GetOptVideoWriter() == 2) && captureResult > 0 )
{
//video_store_data->frameNumber = captureResult;
captureResult = 0;
}
if ( captureResult != 0 )
{
// Unable to capture image for temporary reason
@@ -3079,8 +3200,9 @@ int Monitor::Capture()
}
}
}
} // end if captureResults == 1
} // if true? let's get rid of this.
if ( true ) {
if ( capture_image->Size() > camera->ImageSize() )
@@ -3192,15 +3314,15 @@ void Monitor::TimestampImage( Image *ts_image, const struct timeval *ts_time ) c
}
}
bool Monitor::closeEvent() {
if ( event )
{
if ( function == RECORD || function == MOCORD )
{
gettimeofday( &(event->EndTime()), NULL );
}
delete event;
video_store_data->recording = false;
event = 0;
return( true );
}
@@ -3464,7 +3586,7 @@ bool Monitor::DumpSettings( char *output, bool verbose )
zones[i]->DumpSettings( output+strlen(output), verbose );
}
return( true );
} // bool Monitor::DumpSettings( char *output, bool verbose )
bool MonitorStream::checkSwapPath( const char *path, bool create_path )
{
@@ -4342,3 +4464,16 @@ void Monitor::SingleImageZip( int scale)
fprintf( stdout, "Content-Type: image/x-rgbz\r\n\r\n" );
fwrite( img_buffer, img_buffer_size, 1, stdout );
}
unsigned int Monitor::Colours() const { return( camera->Colours() ); }
unsigned int Monitor::SubpixelOrder() const { return( camera->SubpixelOrder() ); }
int Monitor::PrimeCapture() {
return( camera->PrimeCapture() );
}
int Monitor::PreCapture() {
return( camera->PreCapture() );
}
int Monitor::PostCapture() {
return( camera->PostCapture() );
}
Monitor::Orientation Monitor::getOrientation()const { return orientation; }

View File

@@ -29,6 +29,7 @@
#include "zm_rgb.h"
#include "zm_zone.h"
#include "zm_event.h"
class Monitor;
#include "zm_camera.h"
#include "zm_utils.h"
@@ -153,6 +154,19 @@ protected:
void* padding;
};
//TODO: Technically we can't exclude this struct when people don't have avformat as the Memory.pm module doesn't know about avformat
#if 1
//sizeOf(VideoStoreData) expected to be 4104 bytes on 32bit and 64bit
typedef struct
{
uint32_t size;
char event_file[4096];
uint32_t recording; //bool arch dependent so use uint32 instead
//uint32_t frameNumber;
} VideoStoreData;
#endif // HAVE_LIBAVFORMAT
class MonitorLink
{
protected:
@@ -173,6 +187,7 @@ protected:
volatile SharedData *shared_data;
volatile TriggerData *trigger_data;
volatile VideoStoreData *video_store_data;
int last_state;
int last_event;
@@ -220,6 +235,13 @@ protected:
unsigned int v4l_captures_per_frame;
Orientation orientation; // Whether the image has to be rotated at all
unsigned int deinterlacing;
int savejpegspref;
int videowriterpref;
std::string encoderparams;
std::vector<EncoderParameter_t> encoderparamsvec;
bool record_audio; // Whether to store the audio that we receive
int brightness; // The statically saved brightness of the camera
int contrast; // The statically saved contrast of the camera
int hue; // The statically saved hue of the camera
@@ -284,6 +306,7 @@ protected:
SharedData *shared_data;
TriggerData *trigger_data;
VideoStoreData *video_store_data;
Snapshot *image_buffer;
Snapshot next_buffer; /* Used by four field deinterlacing */
@@ -305,9 +328,50 @@ protected:
MonitorLink **linked_monitors;
public:
Monitor( int p_id );
// OurCheckAlarms seems to be unused. Check it on zm_monitor.cpp for more info.
//bool OurCheckAlarms( Zone *zone, const Image *pImage );
Monitor(
int p_id,
const char *p_name,
unsigned int p_server_id,
int p_function,
bool p_enabled,
const char *p_linked_monitors,
Camera *p_camera,
int p_orientation,
unsigned int p_deinterlacing,
int p_savejpegs,
int p_videowriter,
std::string p_encoderparams,
bool p_record_audio,
const char *p_event_prefix,
const char *p_label_format,
const Coord &p_label_coord,
int label_size,
int p_image_buffer_count,
int p_warmup_count,
int p_pre_event_count,
int p_post_event_count,
int p_stream_replay_buffer,
int p_alarm_frame_count,
int p_section_length,
int p_frame_skip,
int p_motion_frame_skip,
double p_analysis_fps,
unsigned int p_analysis_update_delay,
int p_capture_delay,
int p_alarm_capture_delay,
int p_fps_report_interval,
int p_ref_blend_perc,
int p_alarm_ref_blend_perc,
bool p_track_motion,
Rgb p_signal_check_colour,
bool p_embed_exif,
Purpose p_purpose,
int p_n_zones=0,
Zone *p_zones[]=0
);
~Monitor();
void AddZones( int p_n_zones, Zone *p_zones[] );
@@ -357,12 +421,16 @@ public:
{
return( embed_exif );
}
Orientation getOrientation() const;
unsigned int Width() const { return width; }
unsigned int Height() const { return height; }
unsigned int Colours() const;
unsigned int SubpixelOrder() const;
int GetOptSaveJPEGs() const { return( savejpegspref ); }
int GetOptVideoWriter() const { return( videowriterpref ); }
const std::vector<EncoderParameter_t>* GetOptEncoderParams() const { return( &encoderparamsvec ); }
State GetState() const;
int GetImage( int index=-1, int scale=100 );
@@ -392,19 +460,10 @@ public:
int actionColour( int p_colour=-1 );
int actionContrast( int p_contrast=-1 );
int PrimeCapture();
int PreCapture();
int Capture();
int PostCapture();
unsigned int DetectMotion( const Image &comp_image, Event::StringSet &zoneSet );
// DetectBlack seems to be unused. Check it on zm_monitor.cpp for more info.

View File

@@ -21,13 +21,28 @@
#include "zm_utils.h"
RemoteCamera::RemoteCamera(
unsigned int p_monitor_id,
const std::string &p_protocol,
const std::string &p_host,
const std::string &p_port,
const std::string &p_path,
int p_width,
int p_height,
int p_colours,
int p_brightness,
int p_contrast,
int p_hue,
int p_colour,
bool p_capture,
bool p_record_audio
) :
Camera( p_monitor_id, REMOTE_SRC, p_width, p_height, p_colours, ZM_SUBPIX_ORDER_DEFAULT_FOR_COLOUR(p_colours), p_brightness, p_contrast, p_hue, p_colour, p_capture, p_record_audio ),
protocol( p_protocol ),
host( p_host ),
port( p_port ),
path( p_path ),
hp( 0 )
{
if ( path[0] != '/' )
path = '/'+path;

View File

@@ -27,6 +27,7 @@
#include <sys/types.h>
#include <sys/socket.h>
#include <netdb.h>
#include <arpa/inet.h>
//
// Class representing 'remote' cameras, i.e. those which are
@@ -55,7 +56,22 @@ protected:
struct addrinfo *hp;
public:
RemoteCamera(
unsigned int p_monitor_id,
const std::string &p_proto,
const std::string &p_host,
const std::string &p_port,
const std::string &p_path,
int p_width,
int p_height,
int p_colours,
int p_brightness,
int p_contrast,
int p_hue,
int p_colour,
bool p_capture,
bool p_record_audio
);
virtual ~RemoteCamera();
const std::string &Protocol() const { return( protocol ); }
@@ -73,6 +89,7 @@ public:
virtual int PreCapture() = 0;
virtual int Capture( Image &image ) = 0;
virtual int PostCapture() = 0;
virtual int CaptureAndRecord( Image &image, bool recording, char* event_directory ) = 0;
};
#endif // ZM_REMOTE_CAMERA_H

View File

@@ -30,9 +30,39 @@
#ifdef SOLARIS
#include <sys/filio.h> // FIONREAD and friends
#endif
#ifdef __FreeBSD__
#include <netinet/in.h>
#endif
RemoteCameraHttp::RemoteCameraHttp(
unsigned int p_monitor_id,
const std::string &p_method,
const std::string &p_host,
const std::string &p_port,
const std::string &p_path,
int p_width, int p_height,
int p_colours,
int p_brightness,
int p_contrast,
int p_hue,
int p_colour,
bool p_capture,
bool p_record_audio ) :
RemoteCamera(
p_monitor_id,
"http",
p_host,
p_port,
p_path,
p_width,
p_height,
p_colours,
p_brightness,
p_contrast,
p_hue,
p_colour,
p_capture,
p_record_audio )
{
sd = -1;
@@ -44,7 +74,7 @@ RemoteCameraHttp::RemoteCameraHttp( int p_id, const std::string &p_method, const
else if ( p_method == "regexp" )
method = REGEXP;
else
Fatal( "Unrecognised method '%s' when creating HTTP camera %d", p_method.c_str(), monitor_id );
if ( capture )
{
Initialise();
@@ -108,7 +138,12 @@ int RemoteCameraHttp::Connect()
{
close(sd);
sd = -1;
char buf[INET6_ADDRSTRLEN]; // sized for inet_ntop's text form, not sizeof(struct in6_addr)
struct sockaddr_in *addr;
addr = (struct sockaddr_in *)p->ai_addr;
inet_ntop( AF_INET, &(addr->sin_addr), buf, INET6_ADDRSTRLEN );
Warning("Can't connect to remote camera mid: %d at %s: %s", monitor_id, buf, strerror(errno) );
continue;
}

View File

@@ -45,7 +45,7 @@ protected:
enum { SIMPLE, REGEXP } method;
public:
RemoteCameraHttp( unsigned int p_monitor_id, const std::string &method, const std::string &host, const std::string &port, const std::string &path, int p_width, int p_height, int p_colours, int p_brightness, int p_contrast, int p_hue, int p_colour, bool p_capture, bool p_record_audio );
~RemoteCameraHttp();
void Initialise();
@@ -58,6 +58,7 @@ public:
int PreCapture();
int Capture( Image &image );
int PostCapture();
int CaptureAndRecord( Image &image, bool recording, char* event_directory ) { return(0); }
};
#endif // ZM_REMOTE_CAMERA_HTTP_H

View File

@@ -28,8 +28,8 @@
#include <sys/types.h>
#include <sys/socket.h>
RemoteCameraRtsp::RemoteCameraRtsp( unsigned int p_monitor_id, const std::string &p_method, const std::string &p_host, const std::string &p_port, const std::string &p_path, int p_width, int p_height, bool p_rtsp_describe, int p_colours, int p_brightness, int p_contrast, int p_hue, int p_colour, bool p_capture, bool p_record_audio ) :
RemoteCamera( p_monitor_id, "rtsp", p_host, p_port, p_path, p_width, p_height, p_colours, p_brightness, p_contrast, p_hue, p_colour, p_capture, p_record_audio ),
rtsp_describe( p_rtsp_describe ),
rtspThread( 0 )
@@ -43,7 +43,7 @@ RemoteCameraRtsp::RemoteCameraRtsp( int p_id, const std::string &p_method, const
else if ( p_method == "rtpRtspHttp" )
method = RtspThread::RTP_RTSP_HTTP;
else
Fatal( "Unrecognised method '%s' when creating RTSP camera %d", p_method.c_str(), monitor_id );
if ( capture )
{
@@ -52,11 +52,14 @@ RemoteCameraRtsp::RemoteCameraRtsp( int p_id, const std::string &p_method, const
mFormatContext = NULL;
mVideoStreamId = -1;
mAudioStreamId = -1;
mCodecContext = NULL;
mCodec = NULL;
mRawFrame = NULL;
mFrame = NULL;
frameCount = 0;
wasRecording = false;
startTime = 0;
#if HAVE_LIBSWSCALE
mConvertContext = NULL;
@@ -113,6 +116,8 @@ void RemoteCameraRtsp::Initialise()
int max_size = width*height*colours;
// This allocates a buffer able to hold a raw frame, which is a little arbitrary. Might be nice to get some
// decent data on how large a buffer is really needed.
buffer.size( max_size );
if ( logDebugging() )
@@ -132,7 +137,7 @@ void RemoteCameraRtsp::Terminate()
int RemoteCameraRtsp::Connect()
{
rtspThread = new RtspThread( monitor_id, method, protocol, host, port, path, auth, rtsp_describe );
rtspThread->start();
@@ -168,7 +173,8 @@ int RemoteCameraRtsp::PrimeCapture()
// Find first video stream present
mVideoStreamId = -1;
// Find the first video stream.
for ( unsigned int i = 0; i < mFormatContext->nb_streams; i++ ) {
#if (LIBAVCODEC_VERSION_CHECK(52, 64, 0, 64, 0) || LIBAVUTIL_VERSION_CHECK(50, 14, 0, 14, 0))
if ( mFormatContext->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO )
#else
@@ -178,6 +184,7 @@ int RemoteCameraRtsp::PrimeCapture()
mVideoStreamId = i;
break;
}
}
if ( mVideoStreamId == -1 )
Fatal( "Unable to locate video stream" );
@@ -265,7 +272,7 @@ int RemoteCameraRtsp::Capture( Image &image )
Error("Failed requesting writeable buffer for the captured image.");
return (-1);
}
while ( true )
{
buffer.clear();
@@ -307,62 +314,238 @@ int RemoteCameraRtsp::Capture( Image &image )
}
av_init_packet( &packet );
while ( !frameComplete && buffer.size() > 0 ) {
packet.data = buffer.head();
packet.size = buffer.size();
// So I think this is the magic decode step. Result is a raw image?
#if LIBAVCODEC_VERSION_CHECK(52, 23, 0, 23, 0)
int len = avcodec_decode_video2( mCodecContext, mRawFrame, &frameComplete, &packet );
#else
int len = avcodec_decode_video( mCodecContext, mRawFrame, &frameComplete, packet.data, packet.size );
#endif
if ( len < 0 ) {
Error( "Error while decoding frame %d", frameCount );
Hexdump( Logger::ERROR, buffer.head(), buffer.size()>256?256:buffer.size() );
buffer.clear();
continue;
}
Debug( 2, "Frame: %d - %d/%d", frameCount, len, buffer.size() );
//if ( buffer.size() < 400 )
//Hexdump( 0, buffer.head(), buffer.size() );
buffer -= len;
}
// At this point, we either have a frame or ran out of buffer. What happens if we run out of buffer?
if ( frameComplete ) {
Debug( 3, "Got frame %d", frameCount );
avpicture_fill( (AVPicture *)mFrame, directbuffer, imagePixFormat, width, height );
#if HAVE_LIBSWSCALE
if(mConvertContext == NULL) {
mConvertContext = sws_getContext( mCodecContext->width, mCodecContext->height, mCodecContext->pix_fmt, width, height, imagePixFormat, SWS_BICUBIC, NULL, NULL, NULL );
if(mConvertContext == NULL)
Fatal( "Unable to create conversion context");
}
if ( sws_scale( mConvertContext, mRawFrame->data, mRawFrame->linesize, 0, mCodecContext->height, mFrame->data, mFrame->linesize ) < 0 )
Fatal( "Unable to convert raw format %u to target format %u at frame %d", mCodecContext->pix_fmt, imagePixFormat, frameCount );
#else // HAVE_LIBSWSCALE
Fatal( "You must compile ffmpeg with the --enable-swscale option to use RTSP cameras" );
#endif // HAVE_LIBSWSCALE
frameCount++;
} /* frame complete */
#if LIBAVCODEC_VERSION_CHECK(57, 8, 0, 12, 100)
av_packet_unref( &packet );
#else
av_free_packet( &packet );
#endif
} /* getFrame() */
if(frameComplete)
return (0);
} // end while true
// can never get here.
return (0);
}
//Function to handle capture and store
int RemoteCameraRtsp::CaptureAndRecord( Image &image, bool recording, char* event_file ) {
AVPacket packet;
uint8_t* directbuffer;
int frameComplete = false;
/* Request a writeable buffer of the target image */
directbuffer = image.WriteBuffer(width, height, colours, subpixelorder);
if(directbuffer == NULL) {
Error("Failed requesting writeable buffer for the captured image.");
return (-1);
}
while ( true ) {
buffer.clear();
if ( !rtspThread->isRunning() )
return (-1);
if ( rtspThread->getFrame( buffer ) ) {
Debug( 3, "Read frame %d bytes", buffer.size() );
Debug( 4, "Address %p", buffer.head() );
Hexdump( 4, buffer.head(), 16 );
if ( !buffer.size() )
return( -1 );
if(mCodecContext->codec_id == AV_CODEC_ID_H264) {
// SPS and PPS frames should be saved and appended to IDR frames
int nalType = (buffer.head()[3] & 0x1f);
// SPS
if(nalType == 7) {
lastSps = buffer;
continue;
}
// PPS
else if(nalType == 8) {
lastPps = buffer;
continue;
}
// IDR
else if(nalType == 5) {
buffer += lastSps;
buffer += lastPps;
}
} // end if H264, what about other codecs?
av_init_packet( &packet );
// Why are we checking for it being the video stream
if ( packet.stream_index == mVideoStreamId ) {
while ( !frameComplete && buffer.size() > 0 ) {
packet.data = buffer.head();
packet.size = buffer.size();
// So this does the decode
#if LIBAVCODEC_VERSION_CHECK(52, 23, 0, 23, 0)
int len = avcodec_decode_video2( mCodecContext, mRawFrame, &frameComplete, &packet );
#else
int len = avcodec_decode_video( mCodecContext, mRawFrame, &frameComplete, packet.data, packet.size );
#endif
if ( len < 0 ) {
Error( "Error while decoding frame %d", frameCount );
Hexdump( Logger::ERROR, buffer.head(), buffer.size()>256?256:buffer.size() );
buffer.clear();
continue;
}
Debug( 2, "Frame: %d - %d/%d", frameCount, len, buffer.size() );
//if ( buffer.size() < 400 )
//Hexdump( 0, buffer.head(), buffer.size() );
buffer -= len;
} // end while get & decode a frame
if ( frameComplete ) {
Debug( 3, "Got frame %d", frameCount );
#if LIBAVUTIL_VERSION_CHECK(54, 6, 0, 6, 0)
av_image_fill_arrays(mFrame->data, mFrame->linesize,
directbuffer, imagePixFormat, width, height, 1);
#else
avpicture_fill( (AVPicture *)mFrame, directbuffer,
imagePixFormat, width, height);
#endif
//Video recording
if ( recording && !wasRecording ) {
//Instantiate the video storage module
videoStore = new VideoStore((const char *)event_file, "mp4",
mFormatContext->streams[mVideoStreamId],
mAudioStreamId==-1?NULL:mFormatContext->streams[mAudioStreamId],
startTime,
this->getMonitor()->getOrientation() );
wasRecording = true;
strcpy(oldDirectory, event_file);
} else if ( !recording && wasRecording && videoStore ) {
// Why are we deleting the videostore? Because for some reason we are no longer recording? How does that happen?
Info("Deleting videoStore instance");
delete videoStore;
videoStore = NULL;
}
//The directory we are recording to is no longer tied to the current event. Need to re-init the videostore with the correct directory and start recording again
if ( recording && wasRecording && (strcmp(oldDirectory, event_file)!=0) && (packet.flags & AV_PKT_FLAG_KEY) ) {
//don't open new videostore until we're on a key frame..would this require an offset adjustment for the event as a result?...if we store our key frame location with the event will that be enough?
Info("Re-starting video storage module");
if ( videoStore ) {
delete videoStore;
videoStore = NULL;
}
videoStore = new VideoStore((const char *)event_file, "mp4",
mFormatContext->streams[mVideoStreamId],
mAudioStreamId==-1?NULL:mFormatContext->streams[mAudioStreamId],
startTime,
this->getMonitor()->getOrientation() );
strcpy( oldDirectory, event_file );
}
if ( videoStore && recording ) {
//Write the packet to our video store
int ret = videoStore->writeVideoFramePacket(&packet, mFormatContext->streams[mVideoStreamId]);//, &lastKeyframePkt);
if ( ret < 0 ) {//Less than zero and we skipped a frame
av_free_packet( &packet );
return 0;
}
}
#if HAVE_LIBSWSCALE
if(mConvertContext == NULL) {
mConvertContext = sws_getContext( mCodecContext->width, mCodecContext->height, mCodecContext->pix_fmt, width, height, imagePixFormat, SWS_BICUBIC, NULL, NULL, NULL );
if(mConvertContext == NULL)
Fatal( "Unable to create conversion context");
}
if ( sws_scale( mConvertContext, mRawFrame->data, mRawFrame->linesize, 0, mCodecContext->height, mFrame->data, mFrame->linesize ) < 0 )
Fatal( "Unable to convert raw format %u to target format %u at frame %d", mCodecContext->pix_fmt, imagePixFormat, frameCount );
#else // HAVE_LIBSWSCALE
Fatal( "You must compile ffmpeg with the --enable-swscale option to use RTSP cameras" );
#endif // HAVE_LIBSWSCALE
frameCount++;
} /* frame complete */
} else if ( packet.stream_index == mAudioStreamId ) {
Debug( 4, "Got audio packet" );
if ( videoStore && recording ) {
if ( record_audio ) {
Debug( 4, "Storing Audio packet" );
//Write the packet to our video store
int ret = videoStore->writeAudioFramePacket(&packet, mFormatContext->streams[packet.stream_index]); //FIXME no relevance of last key frame
if ( ret < 0 ) { //Less than zero and we skipped a frame
#if LIBAVCODEC_VERSION_CHECK(57, 8, 0, 12, 100)
av_packet_unref( &packet );
#else
av_free_packet( &packet );
#endif
return 0;
}
}
}
} // end if video or audio packet
#if LIBAVCODEC_VERSION_CHECK(57, 8, 0, 12, 100)
av_packet_unref( &packet );
#else
av_free_packet( &packet );
#endif
@@ -371,9 +554,9 @@ int RemoteCameraRtsp::Capture( Image &image )
if(frameComplete)
return (0);
} // end while true
return (0);
} // int RemoteCameraRtsp::CaptureAndRecord( Image &image, bool recording, char* event_file )
int RemoteCameraRtsp::PostCapture()
{


@@ -26,6 +26,7 @@
#include "zm_utils.h"
#include "zm_rtsp.h"
#include "zm_ffmpeg.h"
#include "zm_videostore.h"
//
// Class representing 'rtsp' cameras, i.e. those which are
@@ -55,19 +56,24 @@ protected:
#if HAVE_LIBAVFORMAT
AVFormatContext *mFormatContext;
int mVideoStreamId;
int mAudioStreamId;
AVCodecContext *mCodecContext;
AVCodec *mCodec;
AVFrame *mRawFrame;
AVFrame *mFrame;
_AVPIXELFORMAT imagePixFormat;
#endif // HAVE_LIBAVFORMAT
bool wasRecording;
VideoStore *videoStore;
char oldDirectory[4096];
int64_t startTime;
#if HAVE_LIBSWSCALE
struct SwsContext *mConvertContext;
#endif
public:
RemoteCameraRtsp( unsigned int p_monitor_id, const std::string &method, const std::string &host, const std::string &port, const std::string &path, int p_width, int p_height, bool p_rtsp_describe, int p_colours, int p_brightness, int p_contrast, int p_hue, int p_colour, bool p_capture, bool p_record_audio );
~RemoteCameraRtsp();
void Initialise();
@@ -79,7 +85,7 @@ public:
int PreCapture();
int Capture( Image &image );
int PostCapture();
int CaptureAndRecord( Image &image, bool recording, char* event_directory );
};
#endif // ZM_REMOTE_CAMERA_RTSP_H


@@ -262,6 +262,11 @@ bool RtpSource::handlePacket( const unsigned char *packet, size_t packetLen )
int rtpHeaderSize = 12 + rtpHeader->cc * 4;
// No need to check for nal type as non fragmented packets already have 001 start sequence appended
bool h264FragmentEnd = (mCodecId == AV_CODEC_ID_H264) && (packet[rtpHeaderSize+1] & 0x40);
// M stands for Marker; it is the 8th bit.
// The interpretation of the marker is defined by a profile. It is intended
// to allow significant events such as frame boundaries to be marked in the
// packet stream. A profile may define additional marker bits or specify
// that there is no marker bit by changing the number of bits in the payload type field.
bool thisM = rtpHeader->m || h264FragmentEnd;
if ( updateSeq( ntohs(rtpHeader->seqN) ) )
@@ -275,15 +280,18 @@ bool RtpSource::handlePacket( const unsigned char *packet, size_t packetLen )
if( mCodecId == AV_CODEC_ID_H264 )
{
int nalType = (packet[rtpHeaderSize] & 0x1f);
Debug( 3, "Have H264 frame: nal type is %d", nalType );
switch (nalType)
{
case 24: // STAP-A
{
extraHeader = 2;
break;
}
case 25: // STAP-B
case 26: // MTAP-16
case 27: // MTAP-24
{
extraHeader = 3;
break;
}
@@ -304,6 +312,9 @@ bool RtpSource::handlePacket( const unsigned char *packet, size_t packetLen )
extraHeader = 2;
break;
}
default: {
Debug(3, "Unhandled nalType %d", nalType );
}
}
// Append NAL frame start code
@@ -311,6 +322,8 @@ bool RtpSource::handlePacket( const unsigned char *packet, size_t packetLen )
mFrame.append( "\x0\x0\x1", 3 );
}
mFrame.append( packet+rtpHeaderSize+extraHeader, packetLen-rtpHeaderSize-extraHeader );
} else {
Debug( 3, "NOT H264 frame: type is %d", mCodecId );
}
Hexdump( 4, mFrame.head(), 16 );

src/zm_video.cpp (new file, 518 lines)

@@ -0,0 +1,518 @@
// This program is free software; you can redistribute it and/or
// modify it under the terms of the GNU General Public License
// as published by the Free Software Foundation; either version 2
// of the License, or (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program; if not, write to the Free Software
// Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
//
#include "zm.h"
#include "zm_video.h"
#include "zm_image.h"
#include "zm_utils.h"
#include "zm_rgb.h"
#include <sstream>
VideoWriter::VideoWriter(const char* p_container, const char* p_codec, const char* p_path, const unsigned int p_width, const unsigned int p_height, const unsigned int p_colours, const unsigned int p_subpixelorder) :
container(p_container), codec(p_codec), path(p_path), width(p_width), height(p_height), colours(p_colours), subpixelorder(p_subpixelorder), frame_count(0) {
Debug(7,"Video object created");
/* Parameter checking */
if(path.empty()) {
Error("Invalid file path");
}
if(!width || !height) {
Error("Invalid width or height");
}
}
VideoWriter::~VideoWriter() {
Debug(7,"Video object destroyed");
}
int VideoWriter::Reset(const char* new_path) {
/* Common variables reset */
/* If there is a new path, use it */
if(new_path != NULL) {
path = new_path;
}
/* Reset frame counter */
frame_count = 0;
return 0;
}
#if ZM_HAVE_VIDEOWRITER_X264MP4
X264MP4Writer::X264MP4Writer(const char* p_path, const unsigned int p_width, const unsigned int p_height, const unsigned int p_colours, const unsigned int p_subpixelorder, const std::vector<EncoderParameter_t>* p_user_params) : VideoWriter("mp4", "h264", p_path, p_width, p_height, p_colours, p_subpixelorder), bOpen(false), bGotH264AVCInfo(false), bFirstFrame(true) {
/* Initialize ffmpeg if it hasn't been initialized yet */
FFMPEGInit();
/* Initialize swscale */
zm_pf = GetFFMPEGPixelFormat(colours,subpixelorder);
if(zm_pf == 0) {
Error("Unable to match ffmpeg pixelformat");
}
codec_pf = AV_PIX_FMT_YUV420P;
swscaleobj.SetDefaults(zm_pf, codec_pf, width, height);
/* Calculate the image sizes. We will need this for parameter checking */
zm_imgsize = colours * width * height;
codec_imgsize = avpicture_get_size( codec_pf, width, height);
if(!codec_imgsize) {
Error("Failed calculating codec pixel format image size");
}
/* If supplied with user parameters to the encoder, copy them */
if(p_user_params != NULL) {
user_params = *p_user_params;
}
/* Setup x264 parameters */
if(x264config() < 0) {
Error("Failed setting x264 parameters");
}
/* Allocate x264 input picture */
x264_picture_alloc(&x264picin, X264_CSP_I420, x264params.i_width, x264params.i_height);
}
X264MP4Writer::~X264MP4Writer() {
/* Free x264 input picture */
x264_picture_clean(&x264picin);
if(bOpen)
Close();
}
int X264MP4Writer::Open() {
/* Open the encoder */
x264enc = x264_encoder_open(&x264params);
if(x264enc == NULL) {
Error("Failed opening x264 encoder");
return -1;
}
// Debug(4,"x264 maximum delayed frames: %d",x264_encoder_maximum_delayed_frames(x264enc));
x264_nal_t* nals;
int i_nals;
if(x264_encoder_headers(x264enc,&nals,&i_nals) < 0) {
Error("Failed getting encoder headers");
return -2;
}
/* Search SPS NAL for AVC information */
for(int i=0;i<i_nals;i++) {
if(nals[i].i_type == NAL_SPS) {
x264_profleindication = nals[i].p_payload[5];
x264_profilecompat = nals[i].p_payload[6];
x264_levelindication = nals[i].p_payload[7];
bGotH264AVCInfo = true;
break;
}
}
if(!bGotH264AVCInfo) {
Warning("Missing AVC information");
}
/* Create the file */
mp4h = MP4Create((path + ".incomplete").c_str());
if(mp4h == MP4_INVALID_FILE_HANDLE) {
Error("Failed creating mp4 file: %s",path.c_str());
return -10;
}
/* Set the global timescale */
if(!MP4SetTimeScale(mp4h, 1000)) {
Error("Failed setting timescale");
return -11;
}
/* Set the global video profile */
/* I am a bit confused about this one.
I couldn't find what the value should be
Some use 0x15 while others use 0x7f. */
MP4SetVideoProfileLevel(mp4h, 0x7f);
/* Add H264 video track */
mp4vtid = MP4AddH264VideoTrack(mp4h,1000,MP4_INVALID_DURATION,width,height,x264_profleindication,x264_profilecompat,x264_levelindication,3);
if(mp4vtid == MP4_INVALID_TRACK_ID) {
Error("Failed adding H264 video track");
return -12;
}
bOpen = true;
return 0;
}
int X264MP4Writer::Close() {
/* Flush all pending frames */
for(int i = (x264_encoder_delayed_frames(x264enc) + 1); i > 0; i-- ) {
x264encodeloop(true);
}
/* Close the encoder */
x264_encoder_close(x264enc);
/* Close MP4 handle */
MP4Close(mp4h);
/* Required for proper HTTP streaming */
MP4Optimize((path + ".incomplete").c_str(), path.c_str());
/* Delete the temporary file */
unlink((path + ".incomplete").c_str());
bOpen = false;
Debug(7, "Video closed. Total frames: %d", frame_count);
return 0;
}
int X264MP4Writer::Reset(const char* new_path) {
/* Close the encoder and file */
if(bOpen)
Close();
/* Reset common variables */
VideoWriter::Reset(new_path);
/* Reset local variables */
bFirstFrame = true;
bGotH264AVCInfo = false;
prevnals.clear();
prevpayload.clear();
/* Reset x264 parameters */
x264config();
/* Open the encoder */
Open();
return 0;
}
int X264MP4Writer::Encode(const uint8_t* data, const size_t data_size, const unsigned int frame_time) {
/* Parameter checking */
if(data == NULL) {
Error("NULL buffer");
return -1;
}
if(data_size != zm_imgsize) {
Error("The data buffer size does not match the expected size. Expected: %zu Current: %zu", zm_imgsize, data_size);
return -2;
}
if(!bOpen) {
Warning("The encoder was not initialized, initializing now");
Open();
}
/* Convert the image into the x264 input picture */
if(swscaleobj.ConvertDefaults(data, data_size, x264picin.img.plane[0], codec_imgsize) < 0) {
Error("Image conversion failed");
return -3;
}
/* Set PTS */
x264picin.i_pts = frame_time;
/* Do the encoding */
x264encodeloop();
/* Increment frame counter */
frame_count++;
return 0;
}
int X264MP4Writer::Encode(const Image* img, const unsigned int frame_time) {
if(img->Width() != width) {
Error("Source image width differs. Source: %d Output: %d",img->Width(), width);
return -12;
}
if(img->Height() != height) {
Error("Source image height differs. Source: %d Output: %d",img->Height(), height);
return -13;
}
return Encode(img->Buffer(),img->Size(),frame_time);
}
int X264MP4Writer::x264config() {
/* Sets up the encoder configuration */
int x264ret;
/* Defaults */
const char* preset = "veryfast";
const char* tune = "stillimage";
const char* profile = "main";
/* Search the user parameters for preset, tune and profile */
for(unsigned int i=0; i < user_params.size(); i++) {
if(strcmp(user_params[i].pname, "preset") == 0) {
/* Got preset */
preset = user_params[i].pvalue;
} else if(strcmp(user_params[i].pname, "tune") == 0) {
/* Got tune */
tune = user_params[i].pvalue;
} else if(strcmp(user_params[i].pname, "profile") == 0) {
/* Got profile */
profile = user_params[i].pvalue;
}
}
/* Set the defaults and preset and tune */
x264ret = x264_param_default_preset(&x264params, preset, tune);
if(x264ret != 0) {
Error("Failed setting x264 preset %s and tune %s : %d",preset,tune,x264ret);
}
/* Set the profile */
x264ret = x264_param_apply_profile(&x264params, profile);
if(x264ret != 0) {
Error("Failed setting x264 profile %s : %d",profile,x264ret);
}
/* Input format */
x264params.i_width = width;
x264params.i_height = height;
x264params.i_csp = X264_CSP_I420;
/* Quality control */
x264params.rc.i_rc_method = X264_RC_CRF;
x264params.rc.f_rf_constant = 23.0;
/* Enable b-frames */
x264params.i_bframe = 16;
x264params.i_bframe_adaptive = 1;
/* Timebase */
x264params.i_timebase_num = 1;
x264params.i_timebase_den = 1000;
/* Enable variable frame rate */
x264params.b_vfr_input = 1;
/* Disable annex-b (start codes) */
x264params.b_annexb = 0;
/* TODO: Setup error handler */
//x264params.i_log_level = X264_LOG_DEBUG;
/* Process user parameters (excluding preset, tune and profile) */
for(unsigned int i=0; i < user_params.size(); i++) {
/* Skip preset, tune and profile */
if( (strcmp(user_params[i].pname, "preset") == 0) || (strcmp(user_params[i].pname, "tune") == 0) || (strcmp(user_params[i].pname, "profile") == 0) ) {
continue;
}
/* Pass the name and value to x264 */
x264ret = x264_param_parse(&x264params, user_params[i].pname, user_params[i].pvalue);
/* Error checking */
if(x264ret != 0) {
if(x264ret == X264_PARAM_BAD_NAME) {
Error("Failed processing x264 user parameter %s=%s : Bad name", user_params[i].pname, user_params[i].pvalue);
} else if(x264ret == X264_PARAM_BAD_VALUE) {
Error("Failed processing x264 user parameter %s=%s : Bad value", user_params[i].pname, user_params[i].pvalue);
} else {
Error("Failed processing x264 user parameter %s=%s : Unknown error (%d)", user_params[i].pname, user_params[i].pvalue, x264ret);
}
}
}
return 0;
}
void X264MP4Writer::x264encodeloop(bool bFlush) {
x264_nal_t* nals;
int i_nals;
int frame_size;
if(bFlush) {
frame_size = x264_encoder_encode(x264enc, &nals, &i_nals, NULL, &x264picout);
} else {
frame_size = x264_encoder_encode(x264enc, &nals, &i_nals, &x264picin, &x264picout);
}
if (frame_size > 0 || bFlush) {
Debug(8, "x264 Frame: %d PTS: %" PRId64 " DTS: %" PRId64 " Size: %d\n", frame_count, x264picout.i_pts, x264picout.i_dts, frame_size);
/* Handle the previous frame */
if(!bFirstFrame) {
buffer.clear();
/* Process the NALs for the previous frame */
for(unsigned int i=0; i < prevnals.size(); i++) {
Debug(9,"Processing NAL: Type %d Size %d",prevnals[i].i_type,prevnals[i].i_payload);
switch(prevnals[i].i_type) {
case NAL_PPS:
/* PPS NAL */
MP4AddH264PictureParameterSet(mp4h, mp4vtid, prevnals[i].p_payload+4, prevnals[i].i_payload-4);
break;
case NAL_SPS:
/* SPS NAL */
MP4AddH264SequenceParameterSet(mp4h, mp4vtid, prevnals[i].p_payload+4, prevnals[i].i_payload-4);
break;
default:
/* Anything else, hopefully frames, so copy it into the sample */
buffer.append(prevnals[i].p_payload, prevnals[i].i_payload);
}
}
/* Calculate frame duration and offset */
int duration = x264picout.i_dts - prevDTS;
int offset = prevPTS - prevDTS;
/* Write the sample */
if(!buffer.empty()) {
if(!MP4WriteSample(mp4h, mp4vtid, buffer.extract(buffer.size()), buffer.size(), duration, offset, prevKeyframe)) {
Error("Failed writing sample");
}
}
/* Cleanup */
prevnals.clear();
prevpayload.clear();
}
/* Got a frame. Copy this new frame into the previous frame */
if(frame_size > 0) {
/* Copy the NALs and the payloads */
for(int i=0;i<i_nals;i++) {
prevnals.push_back(nals[i]);
prevpayload.append(nals[i].p_payload, nals[i].i_payload);
}
/* Update the payload pointers */
/* This is done in a separate loop because the previous loop might reallocate memory when appending,
making the pointers invalid */
unsigned int payload_offset = 0;
for(unsigned int i=0;i<prevnals.size();i++) {
prevnals[i].p_payload = prevpayload.head() + payload_offset;
payload_offset += nals[i].i_payload;
}
/* We need this for the next frame */
prevPTS = x264picout.i_pts;
prevDTS = x264picout.i_dts;
prevKeyframe = x264picout.b_keyframe;
bFirstFrame = false;
}
} else if(frame_size == 0) {
Debug(7,"x264 encode returned zero. Delayed frames: %d",x264_encoder_delayed_frames(x264enc));
} else {
Error("x264 encode failed: %d",frame_size);
}
}
#endif // ZM_HAVE_VIDEOWRITER_X264MP4
int ParseEncoderParameters(const char* str, std::vector<EncoderParameter_t>* vec) {
if(vec == NULL) {
Error("NULL Encoder parameters vector pointer");
return -1;
}
if(str == NULL) {
Error("NULL Encoder parameters string");
return -2;
}
vec->clear();
if(str[0] == 0) {
/* Empty */
return 0;
}
std::string line;
std::stringstream ss(str);
size_t valueoffset;
size_t valuelen;
unsigned int lineno = 0;
EncoderParameter_t param;
while(std::getline(ss, line) ) {
lineno++;
/* Remove CR if exists */
if(line.length() >= 1 && line[line.length()-1] == '\r') {
line.erase(line.length()-1);
}
/* Skip comments and empty lines */
if(line.empty() || line[0] == '#') {
continue;
}
valueoffset = line.find('=');
if(valueoffset == std::string::npos || valueoffset+1 >= line.length() || valueoffset == 0) {
Warning("Failed parsing encoder parameters line %d: Invalid pair", lineno);
continue;
}
if(valueoffset > (sizeof(param.pname)-1) ) {
Warning("Failed parsing encoder parameters line %d: Name too long", lineno);
continue;
}
valuelen = line.length() - (valueoffset+1);
if( valuelen > (sizeof(param.pvalue)-1) ) {
Warning("Failed parsing encoder parameters line %d: Value too long", lineno);
continue;
}
/* Copy and NULL terminate */
line.copy(param.pname, valueoffset, 0);
line.copy(param.pvalue, valuelen, valueoffset+1);
param.pname[valueoffset] = 0;
param.pvalue[valuelen] = 0;
/* Push to the vector */
vec->push_back(param);
Debug(7, "Parsed encoder parameter: %s = %s", param.pname, param.pvalue);
}
Debug(7, "Parsed %d lines", lineno);
return 0;
}

src/zm_video.h (new file, 173 lines)

@@ -0,0 +1,173 @@
// This program is free software; you can redistribute it and/or
// modify it under the terms of the GNU General Public License
// as published by the Free Software Foundation; either version 2
// of the License, or (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program; if not, write to the Free Software
// Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
//
#ifndef ZM_VIDEO_H
#define ZM_VIDEO_H
#include "zm.h"
#include "zm_rgb.h"
#include "zm_utils.h"
#include "zm_ffmpeg.h"
#include "zm_buffer.h"
/*
#define HAVE_LIBX264 1
#define HAVE_LIBMP4V2 1
#define HAVE_X264_H 1
#define HAVE_MP4_H 1
*/
#if HAVE_MP4V2_MP4V2_H
#include <mp4v2/mp4v2.h>
#endif
#if HAVE_MP4V2_H
#include <mp4v2.h>
#endif
#if HAVE_MP4_H
#include <mp4.h>
#endif
#if HAVE_X264_H
#ifdef __cplusplus
extern "C" {
#endif
#include <x264.h>
#ifdef __cplusplus
}
#endif
#endif
/* Structure for user parameters to the encoder */
struct EncoderParameter_t {
char pname[48];
char pvalue[48];
};
int ParseEncoderParameters(const char* str, std::vector<EncoderParameter_t>* vec);
/* VideoWriter is a generic interface that ZM uses to save events as videos */
/* It is relatively simple and the functions are pure virtual, so they must be implemented by the deriving class */
class VideoWriter {
protected:
std::string container;
std::string codec;
std::string path;
unsigned int width;
unsigned int height;
unsigned int colours;
unsigned int subpixelorder;
unsigned int frame_count;
public:
VideoWriter(const char* p_container, const char* p_codec, const char* p_path, const unsigned int p_width, const unsigned int p_height, const unsigned int p_colours, const unsigned int p_subpixelorder);
virtual ~VideoWriter();
virtual int Encode(const uint8_t* data, const size_t data_size, const unsigned int frame_time) = 0;
virtual int Encode(const Image* img, const unsigned int frame_time) = 0;
virtual int Open() = 0;
virtual int Close() = 0;
virtual int Reset(const char* new_path = NULL);
const char* GetContainer() const {
return container.c_str();
}
const char* GetCodec() const {
return codec.c_str();
}
const char* GetPath() const {
return path.c_str();
}
unsigned int GetWidth() const {
return width;
}
unsigned int GetHeight() const {
return height;
}
unsigned int GetColours() const {
return colours;
}
unsigned int GetSubpixelorder () const {
return subpixelorder;
}
unsigned int GetFrameCount() const {
return frame_count;
}
};
#if HAVE_LIBX264 && HAVE_LIBMP4V2 && HAVE_LIBAVUTIL && HAVE_LIBSWSCALE
#define ZM_HAVE_VIDEOWRITER_X264MP4 1
class X264MP4Writer : public VideoWriter {
protected:
bool bOpen;
bool bGotH264AVCInfo;
bool bFirstFrame;
/* SWScale */
SWScale swscaleobj;
enum _AVPIXELFORMAT zm_pf;
enum _AVPIXELFORMAT codec_pf;
size_t codec_imgsize;
size_t zm_imgsize;
/* User parameters */
std::vector<EncoderParameter_t> user_params;
/* AVC Information */
uint8_t x264_profleindication;
uint8_t x264_profilecompat;
uint8_t x264_levelindication;
/* NALs */
Buffer buffer;
/* Previous frame */
int prevPTS;
int prevDTS;
bool prevKeyframe;
Buffer prevpayload;
std::vector<x264_nal_t> prevnals;
/* Internal functions */
int x264config();
void x264encodeloop(bool bFlush = false);
/* x264 objects */
x264_t* x264enc;
x264_param_t x264params;
x264_picture_t x264picin;
x264_picture_t x264picout;
/* MP4v2 objects */
MP4FileHandle mp4h;
MP4TrackId mp4vtid;
public:
X264MP4Writer(const char* p_path, const unsigned int p_width, const unsigned int p_height, const unsigned int p_colours, const unsigned int p_subpixelorder, const std::vector<EncoderParameter_t>* p_user_params = NULL);
~X264MP4Writer();
int Encode(const uint8_t* data, const size_t data_size, const unsigned int frame_time);
int Encode(const Image* img, const unsigned int frame_time);
int Open();
int Close();
int Reset(const char* new_path = NULL);
};
#endif // HAVE_LIBX264 && HAVE_LIBMP4V2 && HAVE_LIBAVUTIL && HAVE_LIBSWSCALE
#endif // ZM_VIDEO_H

src/zm_videostore.cpp (new file, 392 lines)

@@ -0,0 +1,392 @@
//
// ZoneMinder Video Storage Implementation
// Written by Chris Wiggins
// http://chriswiggins.co.nz
// Modification by Steve Gilvarry
//
// This program is free software; you can redistribute it and/or
// modify it under the terms of the GNU General Public License
// as published by the Free Software Foundation; either version 2
// of the License, or (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program; if not, write to the Free Software
// Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
//
#define __STDC_FORMAT_MACROS 1
#include <stdlib.h>
#include <string.h>
#include <inttypes.h>
#include "zm.h"
#include "zm_videostore.h"
extern "C"{
#include "libavutil/time.h"
}
VideoStore::VideoStore(const char *filename_in, const char *format_in,
AVStream *input_st,
AVStream *inpaud_st,
int64_t nStartTime,
Monitor::Orientation orientation
) {
AVDictionary *pmetadata = NULL;
int dsr;
//store inputs in variables local to class
filename = filename_in;
format = format_in;
keyframeMessage = false;
keyframeSkipNumber = 0;
Info("Opening video storage stream %s format: %s\n", filename, format);
//Init everything we need
int ret;
av_register_all();
ret = avformat_alloc_output_context2(&oc, NULL, NULL, filename);
if ( ret < 0 ) {
Warning("Could not create video storage stream %s as no output context"
" could be assigned based on filename: %s",
filename,
av_make_error_string(ret).c_str()
);
}
//Couldn't deduce format from filename, trying from format name
if (!oc) {
avformat_alloc_output_context2(&oc, NULL, format, filename);
if (!oc) {
Fatal("Could not create video storage stream %s as no output context"
" could be assigned based on filename or format %s",
filename, format);
}
}
dsr = av_dict_set(&pmetadata, "title", "Zoneminder Security Recording", 0);
if (dsr < 0) Warning("%s:%d: title set failed", __FILE__, __LINE__ );
oc->metadata = pmetadata;
fmt = oc->oformat;
video_st = avformat_new_stream(oc, (AVCodec *)input_st->codec->codec);
if (!video_st) {
Fatal("Unable to create video out stream\n");
}
ret = avcodec_copy_context(video_st->codec, input_st->codec);
if (ret < 0) {
Fatal("Unable to copy input video context to output video context "
"%s\n", av_make_error_string(ret).c_str());
}
if ( video_st->sample_aspect_ratio.den != video_st->codec->sample_aspect_ratio.den ) {
Warning("Fixing sample_aspect_ratio.den");
video_st->sample_aspect_ratio.den = video_st->codec->sample_aspect_ratio.den;
}
if ( video_st->sample_aspect_ratio.num != input_st->codec->sample_aspect_ratio.num ) {
Warning("Fixing sample_aspect_ratio.num");
video_st->sample_aspect_ratio.num = input_st->codec->sample_aspect_ratio.num;
}
if ( video_st->codec->codec_id != input_st->codec->codec_id ) {
Warning("Fixing video_st->codec->codec_id");
video_st->codec->codec_id = input_st->codec->codec_id;
}
if ( ! video_st->codec->time_base.num ) {
Warning("video_st->codec->time_base.num is not set (%d/%d). Fixing by setting it to the stream time base %d/%d", video_st->codec->time_base.num, video_st->codec->time_base.den, video_st->time_base.num, video_st->time_base.den);
video_st->codec->time_base.num = video_st->time_base.num;
video_st->codec->time_base.den = video_st->time_base.den;
}
video_st->codec->codec_tag = 0;
if (oc->oformat->flags & AVFMT_GLOBALHEADER) {
video_st->codec->flags |= CODEC_FLAG_GLOBAL_HEADER;
}
if ( orientation ) {
if ( orientation == Monitor::ROTATE_0 ) {
} else if ( orientation == Monitor::ROTATE_90 ) {
dsr = av_dict_set( &video_st->metadata, "rotate", "90", 0);
if (dsr < 0) Warning("%s:%d: rotate set failed", __FILE__, __LINE__ );
} else if ( orientation == Monitor::ROTATE_180 ) {
dsr = av_dict_set( &video_st->metadata, "rotate", "180", 0);
if (dsr < 0) Warning("%s:%d: rotate set failed", __FILE__, __LINE__ );
} else if ( orientation == Monitor::ROTATE_270 ) {
dsr = av_dict_set( &video_st->metadata, "rotate", "270", 0);
if (dsr < 0) Warning("%s:%d: rotate set failed", __FILE__, __LINE__ );
} else {
Warning( "Unsupported Orientation(%d)", orientation );
}
}
if (inpaud_st) {
audio_st = avformat_new_stream(oc, inpaud_st->codec->codec);
if (!audio_st) {
Error("Unable to create audio out stream\n");
audio_st = NULL;
} else {
ret = avcodec_copy_context(audio_st->codec, inpaud_st->codec);
if (ret < 0) {
Fatal("Unable to copy audio context %s\n", av_make_error_string(ret).c_str());
}
audio_st->codec->codec_tag = 0;
if (oc->oformat->flags & AVFMT_GLOBALHEADER) {
audio_st->codec->flags |= CODEC_FLAG_GLOBAL_HEADER;
}
}
} else {
Debug(3, "No Audio output stream");
audio_st = NULL;
}
/* open the output file, if needed */
if (!(fmt->flags & AVFMT_NOFILE)) {
ret = avio_open2(&oc->pb, filename, AVIO_FLAG_WRITE,NULL,NULL);
if (ret < 0) {
Fatal("Could not open output file '%s': %s\n", filename,
av_make_error_string(ret).c_str());
}
}
//av_dict_set(&opts, "movflags", "frag_custom+dash+delay_moov", 0);
//if ((ret = avformat_write_header(ctx, &opts)) < 0) {
//}
//os->ctx_inited = 1;
//avio_flush(ctx->pb);
//av_dict_free(&opts);
/* Write the stream header, if any. */
ret = avformat_write_header(oc, NULL);
if (ret < 0) {
zm_dump_stream_format( oc, 0, 0, 1 );
Fatal("Error occurred when writing output file header to %s: %s\n",
filename,
av_make_error_string(ret).c_str());
}
prevDts = 0;
startPts = 0;
startDts = 0;
filter_in_rescale_delta_last = AV_NOPTS_VALUE;
startTime=av_gettime()-nStartTime;//oc->start_time;
Info("VideoStore startTime=%" PRId64 "\n", startTime);
} // VideoStore::VideoStore
VideoStore::~VideoStore(){
/* Write the trailer before close */
if ( int rc = av_write_trailer(oc) ) {
Error("Error writing trailer %s", av_err2str( rc ) );
} else {
Debug(3, "Success writing trailer");
}
// I wonder if we should be closing the file first.
// I also wonder if we really need to be doing all the context allocation/de-allocation constantly, or whether we can just re-use it. Just do a file open/close/writeheader/etc.
// What if we were only doing audio recording?
if ( video_st ) {
avcodec_close(video_st->codec);
}
if (audio_st) {
avcodec_close(audio_st->codec);
}
// When would we not be writing to a file?
if (!(fmt->flags & AVFMT_NOFILE)) {
/* Close the output file. */
if ( int rc = avio_close(oc->pb) ) {
Error("Error closing avio %s", av_err2str( rc ) );
}
} else {
Debug(3, "Not closing avio because we are not writing to a file.");
}
/* free the stream */
avformat_free_context(oc);
}
void VideoStore::dumpPacket( AVPacket *pkt ){
char b[10240];
snprintf(b, sizeof(b), " pts: %" PRId64 ", dts: %" PRId64 ", data: %p, size: %d, sindex: %d, dflags: %04x, s-pos: %" PRId64 ", c-duration: %" PRId64 "\n"
, pkt->pts
, pkt->dts
, pkt->data
, pkt->size
, pkt->stream_index
, pkt->flags
, pkt->pos
, pkt->convergence_duration
);
Info("%s:%d:DEBUG: %s", __FILE__, __LINE__, b);
}
int VideoStore::writeVideoFramePacket(AVPacket *ipkt, AVStream *input_st){//, AVPacket *lastKeyframePkt){
//Debug(3, "before ost_tbcket %d", startTime );
//zm_dump_stream_format( oc, ipkt->stream_index, 0, 1 );
//Debug(3, "before ost_tbcket %d", startTime );
int64_t ost_tb_start_time = av_rescale_q(startTime, AV_TIME_BASE_Q, video_st->time_base);
AVPacket opkt, safepkt;
AVPicture pict;
av_init_packet(&opkt);
//Scale the PTS of the outgoing packet to be the correct time base
if (ipkt->pts != AV_NOPTS_VALUE) {
opkt.pts = av_rescale_q(ipkt->pts-startPts, input_st->time_base, video_st->time_base) - ost_tb_start_time;
} else {
opkt.pts = AV_NOPTS_VALUE;
}
//Scale the DTS of the outgoing packet to be the correct time base
if(ipkt->dts == AV_NOPTS_VALUE) {
opkt.dts = av_rescale_q(input_st->cur_dts-startDts, AV_TIME_BASE_Q, video_st->time_base);
} else {
opkt.dts = av_rescale_q(ipkt->dts-startDts, input_st->time_base, video_st->time_base);
}
opkt.dts -= ost_tb_start_time;
opkt.duration = av_rescale_q(ipkt->duration, input_st->time_base, video_st->time_base);
opkt.flags = ipkt->flags;
opkt.pos=-1;
opkt.data = ipkt->data;
opkt.size = ipkt->size;
// Some cameras send audio on stream 0 and video on stream 1, so when we discard the audio, the video stream index has to become 0
if ( ipkt->stream_index > 0 and ! audio_st ) {
Debug(1,"Setting stream index to 0 instead of %d", ipkt->stream_index );
opkt.stream_index = 0;
} else {
opkt.stream_index = ipkt->stream_index;
}
/*opkt.flags |= AV_PKT_FLAG_KEY;*/
if (video_st->codec->codec_type == AVMEDIA_TYPE_VIDEO && (fmt->flags & AVFMT_RAWPICTURE)) {
/* store AVPicture in AVPacket, as expected by the output format */
avpicture_fill(&pict, opkt.data, video_st->codec->pix_fmt, video_st->codec->width, video_st->codec->height);
opkt.data = (uint8_t *)&pict;
opkt.size = sizeof(AVPicture);
opkt.flags |= AV_PKT_FLAG_KEY;
}
memcpy(&safepkt, &opkt, sizeof(AVPacket));
if ((opkt.data == NULL)||(opkt.size < 1)) {
Warning("%s:%d: Mangled AVPacket: discarding frame", __FILE__, __LINE__ );
dumpPacket(&opkt);
} else if ((prevDts > 0) && (prevDts >= opkt.dts)) {
Warning("%s:%d: DTS out of order: prev %" PRId64 " >= current %" PRId64 "; discarding frame", __FILE__, __LINE__, prevDts, opkt.dts);
prevDts = opkt.dts;
dumpPacket(&opkt);
} else {
int ret;
prevDts = opkt.dts; // Unsure if av_interleaved_write_frame() clobbers opkt.dts when out of order, so storing in advance
ret = av_interleaved_write_frame(oc, &opkt);
if(ret<0){
// There's nothing we can really do if the frame is rejected, just drop it and get on with the next
Warning("%s:%d: Writing frame [av_interleaved_write_frame()] failed: %s(%d) ", __FILE__, __LINE__, av_make_error_string(ret).c_str(), (ret));
dumpPacket(&safepkt);
}
}
av_free_packet(&opkt);
return 0;
}
int VideoStore::writeAudioFramePacket(AVPacket *ipkt, AVStream *input_st){
if(!audio_st) {
Error("Called writeAudioFramePacket when no audio_st");
return -1;//FIXME -ve return codes do not free packet in ffmpeg_camera at the moment
}
/*if(!keyframeMessage)
return -1;*/
//zm_dump_stream_format( oc, ipkt->stream_index, 0, 1 );
// What is this doing? Getting the time of the start of this video chunk? Does that actually make sense?
int64_t ost_tb_start_time = av_rescale_q(startTime, AV_TIME_BASE_Q, audio_st->time_base);
AVPacket opkt;
av_init_packet(&opkt);
Debug(3, "after init packet" );
//Scale the PTS of the outgoing packet to be the correct time base
if (ipkt->pts != AV_NOPTS_VALUE) {
Debug(3, "Rescaling output pts");
opkt.pts = av_rescale_q(ipkt->pts-startPts, input_st->time_base, audio_st->time_base) - ost_tb_start_time;
} else {
Debug(3, "Setting output pts to AV_NOPTS_VALUE");
opkt.pts = AV_NOPTS_VALUE;
}
//Scale the DTS of the outgoing packet to be the correct time base
if(ipkt->dts == AV_NOPTS_VALUE) {
opkt.dts = av_rescale_q(input_st->cur_dts-startDts, AV_TIME_BASE_Q, audio_st->time_base);
Debug(4, "ipkt->dts == AV_NOPTS_VALUE, used cur_dts; opkt.dts = %" PRId64, opkt.dts );
} else {
opkt.dts = av_rescale_q(ipkt->dts-startDts, input_st->time_base, audio_st->time_base);
Debug(4, "ipkt->dts rescaled from %" PRId64 " to %" PRId64, ipkt->dts, opkt.dts );
}
opkt.dts -= ost_tb_start_time;
// It would be really weird for the codec type to NOT be audio here
if (audio_st->codec->codec_type == AVMEDIA_TYPE_AUDIO && ipkt->dts != AV_NOPTS_VALUE) {
Debug( 4, "code is audio, dts != AV_NOPTS_VALUE " );
int duration = av_get_audio_frame_duration(input_st->codec, ipkt->size);
if(!duration)
duration = input_st->codec->frame_size;
//FIXME where to get filter_in_rescale_delta_last
//FIXME av_rescale_delta doesn't exist in ubuntu vivid libavtools
opkt.dts = opkt.pts = av_rescale_delta(input_st->time_base, ipkt->dts,
(AVRational){1, input_st->codec->sample_rate}, duration, &filter_in_rescale_delta_last,
audio_st->time_base) - ost_tb_start_time;
}
opkt.duration = av_rescale_q(ipkt->duration, input_st->time_base, audio_st->time_base);
opkt.pos=-1;
opkt.flags = ipkt->flags;
opkt.data = ipkt->data;
opkt.size = ipkt->size;
opkt.stream_index = ipkt->stream_index;
int ret;
ret = av_interleaved_write_frame(oc, &opkt);
if(ret!=0){
Fatal("Error encoding audio frame packet: %s\n", av_make_error_string(ret).c_str());
}
Debug(4,"Success writing audio frame" );
av_free_packet(&opkt);
return 0;
}

src/zm_videostore.h (new file)

@@ -0,0 +1,53 @@
#ifndef ZM_VIDEOSTORE_H
#define ZM_VIDEOSTORE_H
#include "zm_ffmpeg.h"
#if HAVE_LIBAVCODEC
#include "zm_monitor.h"
class VideoStore {
private:
AVOutputFormat *fmt;
AVFormatContext *oc;
AVStream *video_st;
AVStream *audio_st;
const char *filename;
const char *format;
bool keyframeMessage;
int keyframeSkipNumber;
int64_t startTime;
int64_t startPts;
int64_t startDts;
int64_t prevDts;
int64_t filter_in_rescale_delta_last;
public:
VideoStore(const char *filename_in, const char *format_in, AVStream *input_st, AVStream *inpaud_st, int64_t nStartTime, Monitor::Orientation p_orientation );
~VideoStore();
int writeVideoFramePacket(AVPacket *pkt, AVStream *input_st);//, AVPacket *lastKeyframePkt);
int writeAudioFramePacket(AVPacket *pkt, AVStream *input_st);
void dumpPacket( AVPacket *pkt );
};
/*
class VideoEvent {
public:
VideoEvent(unsigned int eid);
~VideoEvent();
int createEventImage(unsigned int fid, char *&pBuff);
private:
unsigned int m_eid;
};*/
#endif // HAVE_LIBAVCODEC
#endif // ZM_VIDEOSTORE_H


@@ -1,9 +1,10 @@
 #!/bin/bash
 # Start MySQL
-test -e /var/run/mysqld || install -m 755 -o mysql -g root -d /var/run/mysqld
-su - mysql -s /bin/sh -c "/usr/bin/mysqld_safe > /dev/null 2>&1 &"
+# For Xenial the following won't start mysqld
+#/usr/bin/mysqld_safe &
+# Use this instead:
+service mysql start
 # Give MySQL time to wake up
 SECONDS_LEFT=120

web/.editorconfig (new file)

@@ -0,0 +1,13 @@
; This file is for unifying the coding style for different editors and IDEs.
; More information at http://editorconfig.org
root = true
[*]
indent_style = tab
end_of_line = lf
insert_final_newline = true
trim_trailing_whitespace = true
[*.bat]
end_of_line = crlf

web/.gitignore (new file, vendored)

@@ -0,0 +1,22 @@
# User specific & automatically generated files #
#################################################
/app/Config/database.php
/app/tmp
/lib/Cake/Console/Templates/skel/tmp/
/plugins
/vendors
/build
/dist
/tags
/app/webroot/events
# OS generated files #
######################
.DS_Store
.DS_Store?
._*
.Spotlight-V100
.Trashes
Icon?
ehthumbs.db
Thumbs.db

web/.htaccess (new file)

@@ -0,0 +1,5 @@
<IfModule mod_rewrite.c>
RewriteEngine on
RewriteRule ^$ app/webroot/ [L]
RewriteRule (.*) app/webroot/$1 [L]
</IfModule>

web/.travis.yml (new file)

@@ -0,0 +1,116 @@
language: php
php:
- 5.2
- 5.3
- 5.4
env:
- DB=mysql
- DB=pgsql
- DB=sqlite
matrix:
include:
- php: 5.4
env:
- PHPCS=1
before_script:
- sh -c "if [ '$DB' = 'mysql' ]; then mysql -e 'CREATE DATABASE cakephp_test;'; fi"
- sh -c "if [ '$DB' = 'mysql' ]; then mysql -e 'CREATE DATABASE cakephp_test2;'; fi"
- sh -c "if [ '$DB' = 'mysql' ]; then mysql -e 'CREATE DATABASE cakephp_test3;'; fi"
- sh -c "if [ '$DB' = 'pgsql' ]; then psql -c 'CREATE DATABASE cakephp_test;' -U postgres; fi"
- sh -c "if [ '$DB' = 'pgsql' ]; then psql -c 'CREATE SCHEMA test2;' -U postgres -d cakephp_test; fi"
- sh -c "if [ '$DB' = 'pgsql' ]; then psql -c 'CREATE SCHEMA test3;' -U postgres -d cakephp_test; fi"
- chmod -R 777 ./app/tmp
- sudo apt-get install lighttpd
- pear channel-discover pear.cakephp.org
- pear install --alldeps cakephp/CakePHP_CodeSniffer
- phpenv rehash
- set +H
- echo "<?php
class DATABASE_CONFIG {
private \$identities = array(
'mysql' => array(
'datasource' => 'Database/Mysql',
'host' => '0.0.0.0',
'login' => 'travis'
),
'pgsql' => array(
'datasource' => 'Database/Postgres',
'host' => '127.0.0.1',
'login' => 'postgres',
'database' => 'cakephp_test',
'schema' => array(
'default' => 'public',
'test' => 'public',
'test2' => 'test2',
'test_database_three' => 'test3'
)
),
'sqlite' => array(
'datasource' => 'Database/Sqlite',
'database' => array(
'default' => ':memory:',
'test' => ':memory:',
'test2' => '/tmp/cakephp_test2.db',
'test_database_three' => '/tmp/cakephp_test3.db'
),
)
);
public \$default = array(
'persistent' => false,
'host' => '',
'login' => '',
'password' => '',
'database' => 'cakephp_test',
'prefix' => ''
);
public \$test = array(
'persistent' => false,
'host' => '',
'login' => '',
'password' => '',
'database' => 'cakephp_test',
'prefix' => ''
);
public \$test2 = array(
'persistent' => false,
'host' => '',
'login' => '',
'password' => '',
'database' => 'cakephp_test2',
'prefix' => ''
);
public \$test_database_three = array(
'persistent' => false,
'host' => '',
'login' => '',
'password' => '',
'database' => 'cakephp_test3',
'prefix' => ''
);
public function __construct() {
\$db = 'mysql';
if (!empty(\$_SERVER['DB'])) {
\$db = \$_SERVER['DB'];
}
foreach (array('default', 'test', 'test2', 'test_database_three') as \$source) {
\$config = array_merge(\$this->{\$source}, \$this->identities[\$db]);
if (is_array(\$config['database'])) {
\$config['database'] = \$config['database'][\$source];
}
if (!empty(\$config['schema']) && is_array(\$config['schema'])) {
\$config['schema'] = \$config['schema'][\$source];
}
\$this->{\$source} = \$config;
}
}
}" > app/Config/database.php
script:
- sh -c "if [ '$PHPCS' != '1' ]; then ./lib/Cake/Console/cake test core AllTests --stderr; else phpcs -p --extensions=php --standard=CakePHP ./lib/Cake; fi"
notifications:
email: false


@@ -18,7 +18,7 @@ if(NOT (CMAKE_BINARY_DIR STREQUAL CMAKE_SOURCE_DIR))
 install(FILES "${CMAKE_CURRENT_BINARY_DIR}/api/app/Config/core.php" DESTINATION "${ZM_WEBDIR}/api/app/Config")
 install(FILES "${CMAKE_CURRENT_BINARY_DIR}/api/app/Config/database.php" DESTINATION "${ZM_WEBDIR}/api/app/Config")
 install(FILES "${CMAKE_CURRENT_BINARY_DIR}/api/app/Config/bootstrap.php" DESTINATION "${ZM_WEBDIR}/api/app/Config")
 install(FILES "${CMAKE_CURRENT_BINARY_DIR}/api/lib/Cake/bootstrap.php" DESTINATION "${ZM_WEBDIR}/api/lib/Cake")
 endif(NOT (CMAKE_BINARY_DIR STREQUAL CMAKE_SOURCE_DIR))
 # Install the mootools symlinks (if its not in the source directory)


@@ -71,9 +71,9 @@ $statusData = array(
     "DefaultScale" => true,
     "WebColour" => true,
     "Sequence" => true,
-    "MinEventId" => array( "sql" => "min(Events.Id)", "table" => "Events", "join" => "Events.MonitorId = Monitors.Id", "group" => "Events.MonitorId" ),
-    "MaxEventId" => array( "sql" => "max(Events.Id)", "table" => "Events", "join" => "Events.MonitorId = Monitors.Id", "group" => "Events.MonitorId" ),
-    "TotalEvents" => array( "sql" => "count(Events.Id)", "table" => "Events", "join" => "Events.MonitorId = Monitors.Id", "group" => "Events.MonitorId" ),
+    "MinEventId" => array( "sql" => "(SELECT min(Events.Id) FROM Events WHERE Events.MonitorId = Monitors.Id" ),
+    "MaxEventId" => array( "sql" => "(SELECT max(Events.Id) FROM Events WHERE Events.MonitorId = Monitors.Id" ),
+    "TotalEvents" => array( "sql" => "(SELECT count(Events.Id) FROM Events WHERE Events.MonitorId = Monitors.Id" ),
     "Status" => array( "zmu" => "-m ".escapeshellarg($_REQUEST['id'][0])." -s" ),
     "FrameRate" => array( "zmu" => "-m ".escapeshellarg($_REQUEST['id'][0])." -f" ),
   ),
@@ -117,6 +117,7 @@ $statusData = array(
     "Height" => true,
     "Length" => true,
     "Frames" => true,
+    "DefaultVideo" => true,
     "AlarmFrames" => true,
     "TotScore" => true,
     "AvgScore" => true,
@@ -128,10 +129,10 @@ $statusData = array(
     "Messaged" => true,
     "Executed" => true,
     "Notes" => true,
-    "MinFrameId" => array( "sql" => "min(Frames.FrameId)", "table" => "Frames", "join" => "Events.Id = Frames.EventId", "group" => "Frames.EventId" ),
-    "MaxFrameId" => array( "sql" => "max(Frames.FrameId)", "table" => "Frames", "join" => "Events.Id = Frames.EventId", "group" => "Frames.EventId" ),
-    "MinFrameDelta" => array( "sql" => "min(Frames.Delta)", "table" => "Frames", "join" => "Events.Id = Frames.EventId", "group" => "Frames.EventId" ),
-    "MaxFrameDelta" => array( "sql" => "max(Frames.Delta)", "table" => "Frames", "join" => "Events.Id = Frames.EventId", "group" => "Frames.EventId" ),
+    "MinFrameId" => array( "sql" => "(SELECT min(Frames.FrameId) FROM Frames WHERE EventId=Events.Id)" ),
+    "MaxFrameId" => array( "sql" => "(SELECT max(Frames.FrameId) FROM Frames WHERE Events.Id = Frames.EventId)" ),
+    "MinFrameDelta" => array( "sql" => "(SELECT min(Frames.Delta) FROM Frames WHERE Events.Id = Frames.EventId)" ),
+    "MaxFrameDelta" => array( "sql" => "(SELECT max(Frames.Delta) FROM Frames WHERE Events.Id = Frames.EventId)" ),
     //"Path" => array( "postFunc" => "getEventPath" ),
   ),
 ),
@@ -391,7 +392,7 @@ function getNearEvents()
 {
   if ( $id == $eventId )
   {
-    $prevId = dbFetchNext( $result, 'Id' );
+    $prevEvent = dbFetchNext( $result );
     break;
   }
 }
@@ -402,14 +403,16 @@ function getNearEvents()
 {
   if ( $id == $eventId )
   {
-    $nextId = dbFetchNext( $result, 'Id' );
+    $nextEvent = dbFetchNext( $result );
     break;
   }
 }
 $result = array( 'EventId'=>$eventId );
-$result['PrevEventId'] = empty($prevId)?0:$prevId;
-$result['NextEventId'] = empty($nextId)?0:$nextId;
+$result['PrevEventId'] = empty($prevEvent)?0:$prevEvent['Id'];
+$result['NextEventId'] = empty($nextEvent)?0:$nextEvent['Id'];
+$result['PrevEventDefVideoPath'] = empty($prevEvent)?0:(getEventDefaultVideoPath($prevEvent));
+$result['NextEventDefVideoPath'] = empty($nextEvent)?0:(getEventDefaultVideoPath($nextEvent));
 return( $result );
 }


@@ -527,6 +527,7 @@ if ( !empty($action) )
   'DoNativeMotDet' => 'toggle',
   'Exif' => 'toggle',
   'RTSPDescribe' => 'toggle',
+  'RecordAudio' => 'toggle',
 );
 $columns = getTableColumns( 'Monitors' );


@@ -449,6 +449,9 @@ function getEventPath( $event ) {
   return( $eventPath );
 }
+function getEventDefaultVideoPath( $event ) {
+  return ZM_DIR_EVENTS . "/" . getEventPath($event) . "/" . $event['DefaultVideo'];
+}
 function deletePath( $path ) {
   if ( is_dir( $path ) ) {
@@ -970,21 +973,33 @@ function getImageSrc( $event, $frame, $scale=SCALE_BASE, $captureOnly=false, $ov
   if ( !is_array($frame) )
     $frame = array( 'FrameId'=>$frame, 'Type'=>'' );
-  //echo "S:$scale, CO:$captureOnly<br>";
-  $captImage = sprintf( "%0".ZM_EVENT_IMAGE_DIGITS."d-capture.jpg", $frame['FrameId'] );
+  if ( file_exists( $eventPath.'/snapshot.jpg' ) ) {
+    $captImage = "snapshot.jpg";
+  } else {
+    $captImage = sprintf( "%0".ZM_EVENT_IMAGE_DIGITS."d-capture.jpg", $frame['FrameId'] );
+    if ( ! file_exists( $eventPath.'/'.$captImage ) ) {
+      # Generate the frame JPG
+      if ( $event['DefaultVideo'] ) {
+        $command ='ffmpeg -v 0 -i '.$eventPath.'/'.$event['DefaultVideo'].' -vf "select=gte(n\\,'.$frame['FrameId'].'),setpts=PTS-STARTPTS" '.$eventPath.'/'.$captImage;
+        system( $command, $retval );
+      } else {
+        Error("Can't create frame images from video because there is no video file for this event " );
+      }
+    }
+  }
   $captPath = $eventPath.'/'.$captImage;
   $thumbCaptPath = ZM_DIR_IMAGES.'/'.$event['Id'].'-'.$captImage;
   //echo "CI:$captImage, CP:$captPath, TCP:$thumbCaptPath<br>";
   $analImage = sprintf( "%0".ZM_EVENT_IMAGE_DIGITS."d-analyse.jpg", $frame['FrameId'] );
   $analPath = $eventPath.'/'.$analImage;
-  $analFile = ZM_DIR_EVENTS."/".$analPath;
   $thumbAnalPath = ZM_DIR_IMAGES.'/'.$event['Id'].'-'.$analImage;
   //echo "AI:$analImage, AP:$analPath, TAP:$thumbAnalPath<br>";
   $alarmFrame = $frame['Type']=='Alarm';
-  $hasAnalImage = $alarmFrame && file_exists( $analFile ) && filesize( $analFile );
+  $hasAnalImage = $alarmFrame && file_exists( $analPath ) && filesize( $analPath );
   $isAnalImage = $hasAnalImage && !$captureOnly;
   if ( !ZM_WEB_SCALE_THUMBS || $scale >= SCALE_BASE || !function_exists( 'imagecreatefromjpeg' ) ) {
@@ -1009,21 +1024,19 @@ function getImageSrc( $event, $frame, $scale=SCALE_BASE, $captureOnly=false, $ov
     $thumbPath = $thumbCaptPath;
   }
-  $imageFile = ZM_DIR_EVENTS."/".$imagePath;
-  //$thumbFile = ZM_DIR_EVENTS."/".$thumbPath;
   $thumbFile = $thumbPath;
   if ( $overwrite || !file_exists( $thumbFile ) || !filesize( $thumbFile ) ) {
     // Get new dimensions
-    list( $imageWidth, $imageHeight ) = getimagesize( $imageFile );
+    list( $imageWidth, $imageHeight ) = getimagesize( $imagePath );
     $thumbWidth = $imageWidth * $fraction;
     $thumbHeight = $imageHeight * $fraction;
     // Resample
     $thumbImage = imagecreatetruecolor( $thumbWidth, $thumbHeight );
-    $image = imagecreatefromjpeg( $imageFile );
+    $image = imagecreatefromjpeg( $imagePath );
     imagecopyresampled( $thumbImage, $image, 0, 0, 0, 0, $thumbWidth, $thumbHeight, $imageWidth, $imageHeight );
-    if ( !imagejpeg( $thumbImage, $thumbFile ) )
+    if ( !imagejpeg( $thumbImage, $thumbPath ) )
       Error( "Can't create thumbnail '$thumbPath'" );
   }
 }
@@ -1032,15 +1045,13 @@ function getImageSrc( $event, $frame, $scale=SCALE_BASE, $captureOnly=false, $ov
     'eventPath' => $eventPath,
     'imagePath' => $imagePath,
     'thumbPath' => $thumbPath,
-    'imageFile' => $imageFile,
+    'imageFile' => $imagePath,
     'thumbFile' => $thumbFile,
     'imageClass' => $alarmFrame?"alarm":"normal",
     'isAnalImage' => $isAnalImage,
     'hasAnalImage' => $hasAnalImage,
   );
-  //echo "IP:$imagePath<br>";
-  //echo "TP:$thumbPath<br>";
   return( $imageData );
 }


@@ -0,0 +1,72 @@
console.log('zoomrotate: Start');
(function(){
var defaults, extend;
console.log('zoomrotate: Init defaults');
defaults = {
zoom: 1,
rotate: 0
};
console.log('zoomrotate: Init Extend');
extend = function() {
var args, target, i, object, property;
args = Array.prototype.slice.call(arguments);
target = args.shift() || {};
for (i in args) {
object = args[i];
for (property in object) {
if (object.hasOwnProperty(property)) {
if (typeof object[property] === 'object') {
target[property] = extend(target[property], object[property]);
} else {
target[property] = object[property];
}
}
}
}
return target;
};
/**
* register the zoomrotate plugin
*/
videojs.plugin('zoomrotate', function(options){
console.log('zoomrotate: Register init');
var settings, player, video, poster;
settings = extend(defaults, options);
/* Grab the necessary DOM elements */
player = this.el();
video = this.el().getElementsByTagName('video')[0];
poster = this.el().getElementsByTagName('div')[1]; // div vjs-poster
console.log('zoomrotate: '+video.style);
console.log('zoomrotate: '+poster.style);
console.log('zoomrotate: '+options.rotate);
console.log('zoomrotate: '+options.zoom);
/* Array of possible browser specific settings for transformation */
var properties = ['transform', 'WebkitTransform', 'MozTransform',
'msTransform', 'OTransform'],
prop = properties[0];
/* Iterators */
var i,j;
/* Find out which CSS transform the browser supports */
for(i=0,j=properties.length;i<j;i++){
if(typeof player.style[properties[i]] !== 'undefined'){
prop = properties[i];
break;
}
}
/* Let's do it */
player.style.overflow = 'hidden';
video.style[prop]='scale('+options.zoom+') rotate('+options.rotate+'deg)';
poster.style[prop]='scale('+options.zoom+') rotate('+options.rotate+'deg)';
console.log('zoomrotate: Register end');
});
})();
console.log('zoomrotate: End');


@@ -545,6 +545,7 @@ $SLANG = array(
   'OpNe' => 'not equal to',
   'OpNotIn' => 'not in set',
   'OpNotMatches' => 'does not match',
+  'OptionalEncoderParam' => 'Optional Encoder Parameters',
   'OptionHelp' => 'Option Help',
   'OptionRestartWarning' => 'These changes may not come into effect fully\nwhile the system is running. When you have\nfinished making your changes please ensure that\nyou restart ZoneMinder.',
   'Options' => 'Options',
@@ -585,6 +586,7 @@ $SLANG = array(
   'Protocol' => 'Protocol',
   'Rate' => 'Rate',
   'RecaptchaWarning' => 'Your reCaptcha secret key is invalid. Please correct it, or reCaptcha will not work', // added Sep 24 2015 - PP
+  'RecordAudio' => 'Whether to store the audio stream when saving an event.',
   'Real' => 'Real',
   'Record' => 'Record',
   'RefImageBlendPct' => 'Reference Image Blend %ge',
@@ -620,6 +622,7 @@ $SLANG = array(
   'RunState' => 'Run State',
   'SaveAs' => 'Save as',
   'SaveFilter' => 'Save Filter',
+  'SaveJPEGs' => 'Save JPEGs',
   'Save' => 'Save',
   'Scale' => 'Scale',
   'Score' => 'Score',
@@ -728,6 +731,7 @@ $SLANG = array(
   'VideoGenParms' => 'Video Generation Parameters',
   'VideoGenSucceeded' => 'Video Generation Succeeded!',
   'VideoSize' => 'Video Size',
+  'VideoWriter' => 'Video Writer',
   'Video' => 'Video',
   'ViewAll' => 'View All',
   'ViewEvent' => 'View Event',


@@ -65,6 +65,23 @@
   visibility: hidden;
 }
+#videoBar1 div {
+  text-align: center;
+  float: center;
+}
+#videoBar1 #prevEvent {
+  float: left;
+}
+#videoBar1 #dlEvent {
+  float: center;
+}
+#videoBar1 #nextEvent {
+  float: right;
+}
 #imageFeed {
   text-align: center;
 }


@@ -243,3 +243,64 @@
   height: 10px;
   background-color: #444444;
 }
#eventVideo {
position: relative;
}
#video-controls {
position: absolute;
bottom: 0;
left: 0;
right: 0;
padding: 5px;
opacity: 0;
-webkit-transition: opacity .3s;
-moz-transition: opacity .3s;
-o-transition: opacity .3s;
-ms-transition: opacity .3s;
transition: opacity .3s;
background-image: linear-gradient(bottom, rgb(3,113,168) 13%, rgb(0,136,204) 100%);
background-image: -o-linear-gradient(bottom, rgb(3,113,168) 13%, rgb(0,136,204) 100%);
background-image: -moz-linear-gradient(bottom, rgb(3,113,168) 13%, rgb(0,136,204) 100%);
background-image: -webkit-linear-gradient(bottom, rgb(3,113,168) 13%, rgb(0,136,204) 100%);
background-image: -ms-linear-gradient(bottom, rgb(3,113,168) 13%, rgb(0,136,204) 100%);
background-image: -webkit-gradient(
linear,
left bottom,
left top,
color-stop(0.13, rgb(3,113,168)),
color-stop(1, rgb(0,136,204))
);
}
#eventVideo:hover #video-controls {
opacity: .9;
}
button {
background: rgba(0,0,0,.5);
border: 0;
color: #EEE;
-webkit-border-radius: 3px;
-moz-border-radius: 3px;
-o-border-radius: 3px;
border-radius: 3px;
padding: 0;
}
button:hover {
cursor: pointer;
}
#seekbar {
width: 360px;
border: 0;
padding: 0;
}
#volume-bar {
width: 60px;
border: 0;
padding: 0;
}


@@ -286,3 +286,38 @@ if ( focusWindow )
 }
 window.addEvent( 'domready', checkSize);
function convertLabelFormat(LabelFormat, monitorName){
//convert label format from strftime to moment's format (modified from
//https://raw.githubusercontent.com/benjaminoakes/moment-strftime/master/lib/moment-strftime.js
//added %f and %N below (TODO: add %Q)
var replacements = { a: 'ddd', A: 'dddd', b: 'MMM', B: 'MMMM', d: 'DD', e: 'D', F: 'YYYY-MM-DD', H: 'HH', I: 'hh', j: 'DDDD', k: 'H', l: 'h', m: 'MM', M: 'mm', p: 'A', S: 'ss', u: 'E', w: 'd', W: 'WW', y: 'YY', Y: 'YYYY', z: 'ZZ', Z: 'z', 'f': 'SS', 'N': "["+monitorName+"]", '%': '%' };
var momentLabelFormat = Object.keys(replacements).reduce(function (momentFormat, key) {
var value = replacements[key];
return momentFormat.replace("%" + key, value);
}, LabelFormat);
return momentLabelFormat;
}
function addVideoTimingTrack(video, LabelFormat, monitorName, duration, startTime){
var labelFormat = convertLabelFormat(LabelFormat, monitorName);
var webvttformat = 'HH:mm:ss.SSS', webvttdata="WEBVTT\n\n";
startTime = moment(startTime);
var seconds = moment({s:0}), endduration = moment({s:duration});
while(seconds.isBefore(endduration)){
webvttdata += seconds.format(webvttformat) + " --> ";
seconds.add(1,'s');
webvttdata += seconds.format(webvttformat) + "\n";
webvttdata += startTime.format(labelFormat) + "\n\n";
startTime.add(1, 's');
}
var track = document.createElement('track');
track.kind = "captions";
track.srclang = "en";
track.label = "English";
track['default'] = true;
track.src = 'data:plain/text;charset=utf-8,'+encodeURIComponent(webvttdata);
video.appendChild(track);
}


@ -27,7 +27,7 @@ if ( !canView( 'Events' ) )
$eid = validInt( $_REQUEST['eid'] ); $eid = validInt( $_REQUEST['eid'] );
$fid = !empty($_REQUEST['fid'])?validInt($_REQUEST['fid']):1; $fid = !empty($_REQUEST['fid'])?validInt($_REQUEST['fid']):1;
$sql = 'SELECT E.*,M.Name AS MonitorName,E.Width,E.Height,M.DefaultRate,M.DefaultScale FROM Events AS E INNER JOIN Monitors AS M ON E.MonitorId = M.Id WHERE E.Id = ?'; $sql = 'SELECT E.*,M.Name AS MonitorName,E.Width,E.Height,M.DefaultRate,M.DefaultScale,M.VideoWriter,M.SaveJPEGs,M.Orientation,M.LabelFormat FROM Events AS E INNER JOIN Monitors AS M ON E.MonitorId = M.Id WHERE E.Id = ?';
$sql_values = array( $eid ); $sql_values = array( $eid );
if ( $user['MonitorIds'] ) { if ( $user['MonitorIds'] ) {
@ -59,7 +59,7 @@ $replayModes = array(
if ( isset( $_REQUEST['streamMode'] ) ) if ( isset( $_REQUEST['streamMode'] ) )
$streamMode = validHtmlStr($_REQUEST['streamMode']); $streamMode = validHtmlStr($_REQUEST['streamMode']);
else else
$streamMode = canStream()?'stream':'stills'; $streamMode = 'video';
if ( isset( $_REQUEST['replayMode'] ) ) if ( isset( $_REQUEST['replayMode'] ) )
$replayMode = validHtmlStr($_REQUEST['replayMode']); $replayMode = validHtmlStr($_REQUEST['replayMode']);
@ -70,6 +70,15 @@ else {
$replayMode = array_shift( $keys ); $replayMode = array_shift( $keys );
} }
// videojs zoomrotate only when direct recording
$Zoom = 1;
$Rotation = 0;
if ( $event['VideoWriter'] == "2" ) {
$Rotation = $event['Orientation'];
if ( in_array($event['Orientation'],array("90","270")))
$Zoom = $event['Height']/$event['Width'];
}
parseSort(); parseSort();
parseFilter( $_REQUEST['filter'] ); parseFilter( $_REQUEST['filter'] );
$filterQuery = $_REQUEST['filter']['query']; $filterQuery = $_REQUEST['filter']['query'];
@@ -112,35 +121,54 @@ if ( canEdit( 'Events' ) )
 ?>
     <div id="deleteEvent"><a href="#" onclick="deleteEvent()"><?php echo translate('Delete') ?></a></div>
     <div id="editEvent"><a href="#" onclick="editEvent()"><?php echo translate('Edit') ?></a></div>
+    <div id="archiveEvent" class="hidden"><a href="#" onclick="archiveEvent()"><?php echo translate('Archive') ?></a></div>
+    <div id="unarchiveEvent" class="hidden"><a href="#" onclick="unarchiveEvent()"><?php echo translate('Unarchive') ?></a></div>
 <?php
 }
 if ( canView( 'Events' ) )
 {
 ?>
-    <div id="exportEvent"><a href="#" onclick="exportEvent()"><?php echo translate('Export') ?></a></div>
+    <div id="framesEvent"><a href="#" onclick="showEventFrames()"><?php echo translate('Frames') ?></a></div>
 <?php
-}
-if ( canEdit( 'Events' ) )
+if ( $event['SaveJPEGs'] & 3 )
 {
 ?>
-    <div id="archiveEvent" class="hidden"><a href="#" onclick="archiveEvent()"><?php echo translate('Archive') ?></a></div>
-    <div id="unarchiveEvent" class="hidden"><a href="#" onclick="unarchiveEvent()"><?php echo translate('Unarchive') ?></a></div>
-<?php
-}
-?>
-    <div id="framesEvent"><a href="#" onclick="showEventFrames()"><?php echo translate('Frames') ?></a></div>
-    <div id="streamEvent"<?php if ( $streamMode == 'stream' ) { ?> class="hidden"<?php } ?>><a href="#" onclick="showStream()"><?php echo translate('Stream') ?></a></div>
     <div id="stillsEvent"<?php if ( $streamMode == 'still' ) { ?> class="hidden"<?php } ?>><a href="#" onclick="showStills()"><?php echo translate('Stills') ?></a></div>
 <?php
-if ( ZM_OPT_FFMPEG )
-{
-?>
-    <div id="videoEvent"><a href="#" onclick="videoEvent()"><?php echo translate('Video') ?></a></div>
-<?php
 }
 ?>
+    <div id="videoEvent"<?php if ( $streamMode == 'video' ) { ?> class="hidden"<?php } ?>><a href="#" onclick="showVideo()"><?php echo translate('Video') ?></a></div>
+    <div id="exportEvent"><a href="#" onclick="exportEvent()"><?php echo translate('Export') ?></a></div>
 </div>
-<div id="eventStream">
+<div id="eventVideo" class="">
+<?php
+if ( $event['DefaultVideo'] )
+{
+?>
+  <div id="videoFeed">
+    <video id="videoobj" class="video-js vjs-default-skin" width="<?php echo reScale( $event['Width'], $scale ) ?>" height="<?php echo reScale( $event['Height'], $scale ) ?>" data-setup='{ "controls": true, "playbackRates": [0.5, 1, 1.5, 2, 4, 8, 16, 32, 64, 128, 256], "autoplay": true, "preload": "auto", "plugins": { "zoomrotate": { "rotate": "<?php echo $Rotation ?>", "zoom": "<?php echo $Zoom ?>"}}}'>
+      <source src="<?php echo getEventDefaultVideoPath($event) ?>" type="video/mp4">
+      Your browser does not support the video tag.
+    </video>
+  </div>
+  <!--script>includeVideoJs();</script-->
+  <link href="//vjs.zencdn.net/4.11/video-js.css" rel="stylesheet">
+  <script src="//vjs.zencdn.net/4.11/video.js"></script>
+  <script src="./js/videojs.zoomrotate.js"></script>
+  <script src="//cdnjs.cloudflare.com/ajax/libs/moment.js/2.10.6/moment.min.js"></script>
+  <script>
+    var LabelFormat = "<?php echo validJsStr($event['LabelFormat'])?>";
+    var monitorName = "<?php echo validJsStr($event['MonitorName'])?>";
+    var duration = <?php echo $event['Length'] ?>, startTime = '<?php echo $event['StartTime'] ?>';
+    addVideoTimingTrack(document.getElementById('videoobj'), LabelFormat, monitorName, duration, startTime);
+  </script>
+<?php
+}
+else
+{
+?>
 <div id="imageFeed">
 <?php
 if ( ZM_WEB_STREAM_METHOD == 'mpeg' && ZM_MPEG_LIVE_FORMAT )
@@ -187,15 +215,23 @@ else
         <div class="progressBox" id="progressBox<?php echo $i ?>" title=""></div>
 <?php
 }
+?>
+</div>
+<?php
+}
 ?>
 </div>
 </div>
+<?php
+if ($event['SaveJPEGs'] & 3)
+{
+?>
 <div id="eventStills" class="hidden">
   <div id="eventThumbsPanel">
     <div id="eventThumbs">
     </div>
   </div>
-  <div id="eventImagePanel" class="hidden">
+  <div id="eventImagePanel">
     <div id="eventImageFrame">
       <img id="eventImage" src="graphics/transparent.gif" alt=""/>
       <div id="eventImageBar">
@@ -224,7 +260,10 @@ else
       </div>
     </div>
   </div>
-</div>
+<?php
+}
+}
+?>
 </div>
 </body>
 </html>


@@ -88,12 +88,16 @@ xhtmlHeaders(__FILE__, translate('Frame')." - ".$Event->Id()." - ".$Frame->Frame
 </div>
 <div id="content">
 <p id="image">
-<?php if ( $imageData['hasAnalImage'] ) { ?>
+<?php if ( in_array($event['VideoWriter'],array("1","2")) ) { ?>
+<img src="?view=image-ffmpeg&eid=<?php echo $event['Id'] ?>&fid=<?php echo $frame['FrameId'] ?>&scale=<?php echo $event['DefaultScale'] ?>" class="<?php echo $imageData['imageClass'] ?>">
+<?php } else {
+if ( $imageData['hasAnalImage'] ) { ?>
 <a href="?view=frame&amp;eid=<?php echo $Event->Id() ?>&amp;fid=<?php echo $Frame->FrameId() ?>&amp;scale=<?php echo $scale ?>&amp;show=<?php echo $imageData['isAnalImage']?"capt":"anal" ?>">
 <?php } ?>
 <img id="frameImg" src="<?php echo $Frame->getImageSrc($imageData['isAnalImage']?'analyse':'capture') ?>" width="<?php echo reScale( $Event->Width(), $Event->DefaultScale(), $scale ) ?>" height="<?php echo reScale( $Event->Height(), $Event->DefaultScale(), $scale ) ?>" alt="<?php echo $Frame->EventId()."-".$Frame->FrameId() ?>" class="<?php echo $imageData['imageClass'] ?>"/>
 <?php if ( $imageData['hasAnalImage'] ) { ?></a><?php } ?>
+<?php } ?>
 </p>
 <p id="controls">
 <?php if ( $Frame->FrameId() > 1 ) { ?>
 <a id="firstLink" href="?view=frame&amp;eid=<?php echo $Event->Id() ?>&amp;fid=<?php echo $firstFid ?>&amp;scale=<?php echo $scale ?>"><?php echo translate('First') ?></a>


@@ -0,0 +1,76 @@
<?php
//
// ZoneMinder web frame view file, $Date$, $Revision$
// Copyright (C) 2001-2008 Philip Coombes
//
// This program is free software; you can redistribute it and/or
// modify it under the terms of the GNU General Public License
// as published by the Free Software Foundation; either version 2
// of the License, or (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program; if not, write to the Free Software
// Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
//
if ( !canView( 'Events' ) )
{
$view = "error";
return;
}
$eid = validInt($_REQUEST['eid']);
if ( !empty($_REQUEST['fid']) )
$fid = validInt($_REQUEST['fid']);
$sql = 'SELECT E.*,M.Name AS MonitorName,M.DefaultScale,M.VideoWriter,M.Orientation FROM Events AS E INNER JOIN Monitors AS M ON E.MonitorId = M.Id WHERE E.Id = ?';
$event = dbFetchOne( $sql, NULL, array($eid) );
if ( !empty($fid) ) {
$sql = 'SELECT * FROM Frames WHERE EventId = ? AND FrameId = ?';
if ( !($frame = dbFetchOne( $sql, NULL, array($eid, $fid) )) )
$frame = array( 'FrameId'=>$fid, 'Type'=>'Normal', 'Score'=>0 );
} else {
$frame = dbFetchOne( 'SELECT * FROM Frames WHERE EventId = ? AND Score = ?', NULL, array( $eid, $event['MaxScore'] ) );
}
$maxFid = $event['Frames'];
$firstFid = 1;
$prevFid = $frame['FrameId']-1;
$nextFid = $frame['FrameId']+1;
$lastFid = $maxFid;
$alarmFrame = $frame['Type']=='Alarm';
if ( isset( $_REQUEST['scale'] ) )
$scale = validInt($_REQUEST['scale']);
else
$scale = max( reScale( SCALE_BASE, $event['DefaultScale'], ZM_WEB_DEFAULT_SCALE ), SCALE_BASE );
$Transpose = '';
if ( $event['VideoWriter'] == "2" ) {
$Rotation = $event['Orientation'];
// rotate right
if ( in_array($event['Orientation'],array("90")))
$Transpose = 'transpose=1,';
// rotate 180 // upside down cam
if ( in_array($event['Orientation'],array("180")))
$Transpose = 'transpose=2,transpose=2,';
// rotate left
if ( in_array($event['Orientation'],array("270")))
$Transpose = 'transpose=2,';
}
$focusWindow = true;
$Scale = 100/$scale;
$fid = $fid - 1;
#$command = 'ffmpeg -v 0 -i '.getEventDefaultVideoPath($event).' -vf "select=gte(selected_n\,'.$fid.'),setpts=PTS-STARTPTS" '.$Transpose.',scale=iw/'.$Scale.':-1" -frames:v 1 -f mjpeg -';
$command = 'ffmpeg -v 0 -i '.getEventDefaultVideoPath($event).' -vf "select=gte(n\\,'.$fid.'),setpts=PTS-STARTPTS,'.$Transpose.'scale=iw/'.$Scale.':-1" -f image2 -';
header('Content-Type: image/jpeg');
system($command);
?>
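The view above extracts a single frame from the recorded MP4 by building an ffmpeg filter chain: `select` picks the (0-based) frame, `setpts` resets timestamps, an orientation-dependent `transpose` corrects passthrough rotation, and `scale` applies the display scale. An illustrative reconstruction of that command assembly (hypothetical helper; the real code is the inline PHP above):

```javascript
// Mirrors the PHP: $fid is decremented to 0-based, $Scale = 100/$scale,
// and orientation maps to ffmpeg transpose filters.
function buildFrameCommand(videoPath, fid, orientation, scalePercent) {
  var transpose = '';
  if (orientation == 90) transpose = 'transpose=1,';                   // rotate right
  else if (orientation == 180) transpose = 'transpose=2,transpose=2,'; // upside-down cam
  else if (orientation == 270) transpose = 'transpose=2,';             // rotate left
  var divisor = 100 / scalePercent;
  return 'ffmpeg -v 0 -i ' + videoPath +
    ' -vf "select=gte(n\\,' + (fid - 1) + '),setpts=PTS-STARTPTS,' +
    transpose + 'scale=iw/' + divisor + ':-1" -f image2 -';
}
```

Note that decoding from the start of the file to reach frame N makes this increasingly expensive for late frames, which is one reason the per-frame JPEG options remain available.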

(File diff suppressed because it is too large.)


@@ -1,137 +1,156 @@
-var events = new Object();
-function showEvent( eid, fid, width, height )
-{
-    var url = '?view=event&eid='+eid+'&fid='+fid;
-    url += filterQuery;
-    createPopup( url, 'zmEvent', 'event', width, height );
+var events = {};
+function showEvent( eid, fid, width, height ) {
+    var url = '?view=event&eid='+eid+'&fid='+fid;
+    url += filterQuery;
+    var pop=createPopup( url, 'zmEvent', 'event', width, height );
+    pop.vid=$('preview');
+    //video element is blocking video elements elsewhere in chrome possible interaction with mouseover event?
+    //FIXME unless an exact cause can be determined should store all video controls and do something to the other controls when we want to load a new video seek etc or whatever may block
+    /*var vid= $('preview');
+    vid.oncanplay=null;
+    // vid.currentTime=vid.currentTime-0.1;
+    vid.pause();*/
 }
-function createEventHtml( event, frame )
-{
-    var eventHtml = new Element( 'div' );
+function createEventHtml( event, frame ) {
+    var eventHtml = new Element( 'div' );
     if ( event.Archived > 0 )
         eventHtml.addClass( 'archived' );
-    new Element( 'p' ).inject( eventHtml ).set( 'text', monitorNames[event.MonitorId] );
+    new Element( 'p' ).inject( eventHtml ).set( 'text', monitors[event.MonitorId].Name );
     new Element( 'p' ).inject( eventHtml ).set( 'text', event.Name+(frame?("("+frame.FrameId+")"):"") );
     new Element( 'p' ).inject( eventHtml ).set( 'text', event.StartTime+" - "+event.Length+"s" );
     new Element( 'p' ).inject( eventHtml ).set( 'text', event.Cause );
     if ( event.Notes )
         new Element( 'p' ).inject( eventHtml ).set( 'text', event.Notes );
     if ( event.Archived > 0 )
         new Element( 'p' ).inject( eventHtml ).set( 'text', archivedString );
     return( eventHtml );
 }
-function showEventDetail( eventHtml )
-{
-    $('instruction').addClass( 'hidden' );
-    $('eventData').empty();
-    $('eventData').adopt( eventHtml );
-    $('eventData').removeClass( 'hidden' );
+function showEventDetail( eventHtml ) {
+    $('instruction').addClass( 'hidden' );
+    $('eventData').empty();
+    $('eventData').adopt( eventHtml );
+    $('eventData').removeClass( 'hidden' );
 }
-function eventDataResponse( respObj, respText )
-{
-    var event = respObj.event;
-    if ( !event )
-    {
-        console.log( "Null event" );
-        return;
-    }
-    events[event.Id] = event;
-    if ( respObj.loopback )
-    {
-        requestFrameData( event.Id, respObj.loopback );
-    }
+function eventDataResponse( respObj, respText ) {
+    var event = respObj.event;
+    if ( !event ) {
+        console.log( "Null event" );
+        return;
+    }
+    events[event.Id] = event;
+    if ( respObj.loopback ) {
+        requestFrameData( event.Id, respObj.loopback );
+    }
 }
-function frameDataResponse( respObj, respText )
-{
-    var frame = respObj.frameimage;
-    if ( !frame.FrameId )
-    {
-        console.log( "Null frame" );
-        return;
-    }
+function frameDataResponse( respObj, respText ) {
+    var frame = respObj.frameimage;
+    if ( !frame.FrameId ) {
+        console.log( "Null frame" );
+        return;
+    }
     var event = events[frame.EventId];
-    if ( !event )
-    {
-        console.error( "No event "+frame.eventId+" found" );
-        return;
-    }
+    if ( !event ) {
+        console.error( "No event "+frame.eventId+" found" );
+        return;
+    }
     if ( !event['frames'] )
         event['frames'] = new Object();
     event['frames'][frame.FrameId] = frame;
     event['frames'][frame.FrameId]['html'] = createEventHtml( event, frame );
-    showEventDetail( event['frames'][frame.FrameId]['html'] );
-    loadEventImage( frame.Image.imagePath, event.Id, frame.FrameId, event.Width, event.Height );
+    previewEvent(frame.EventId, frame.FrameId);
 }
 var eventQuery = new Request.JSON( { url: thisUrl, method: 'get', timeout: AJAX_TIMEOUT, link: 'cancel', onSuccess: eventDataResponse } );
 var frameQuery = new Request.JSON( { url: thisUrl, method: 'get', timeout: AJAX_TIMEOUT, link: 'cancel', onSuccess: frameDataResponse } );
-function requestFrameData( eventId, frameId )
-{
-    if ( !events[eventId] )
-    {
-        eventQuery.options.data = "view=request&request=status&entity=event&id="+eventId+"&loopback="+frameId;
-        eventQuery.send();
-    }
-    else
-    {
-        frameQuery.options.data = "view=request&request=status&entity=frameimage&id[0]="+eventId+"&id[1]="+frameId;
-        frameQuery.send();
-    }
+function requestFrameData( eventId, frameId ) {
+    if ( !events[eventId] ) {
+        eventQuery.options.data = "view=request&request=status&entity=event&id="+eventId+"&loopback="+frameId;
+        eventQuery.send();
+    } else {
+        frameQuery.options.data = "view=request&request=status&entity=frameimage&id[0]="+eventId+"&id[1]="+frameId;
+        frameQuery.send();
+    }
 }
-function previewEvent( eventId, frameId )
-{
-    if ( events[eventId] )
-    {
-        if ( events[eventId]['frames'] )
-        {
-            if ( events[eventId]['frames'][frameId] )
-            {
-                showEventDetail( events[eventId]['frames'][frameId]['html'] );
-                loadEventImage( events[eventId].frames[frameId].Image.imagePath, eventId, frameId, events[eventId].Width, events[eventId].Height );
-                return;
-            }
-        }
-    }
-    requestFrameData( eventId, frameId );
+function previewEvent( eventId, frameId ) {
+    if ( events[eventId] ) {
+        var event = events[eventId];
+        if ( event['frames'] ) {
+            if ( event['frames'][frameId] ) {
+                showEventDetail( event['frames'][frameId]['html'] );
+                var imagePath = event.frames[frameId].Image.imagePath;
+                var videoName = event.DefaultVideo;
+                loadEventImage( imagePath, eventId, frameId, event.Width, event.Height, event.Frames/event.Length, videoName, event.Length, event.StartTime, monitors[event.MonitorId]);
+                return;
+            }
+        }
+    }
+    requestFrameData( eventId, frameId );
 }
-function loadEventImage( imagePath, eid, fid, width, height )
-{
-    var imageSrc = $('imageSrc');
-    imageSrc.setProperty( 'src', imagePrefix+imagePath );
-    imageSrc.removeEvent( 'click' );
-    imageSrc.addEvent( 'click', showEvent.pass( [eid, fid, width, height] ) );
-    var eventData = $('eventData');
-    eventData.removeEvent( 'click' );
-    eventData.addEvent( 'click', showEvent.pass( [eid, fid, width, height] ) );
+function loadEventImage( imagePath, eid, fid, width, height, fps, videoName, duration, startTime, Monitor ) {
+    var vid= $('preview');
+    var imageSrc = $('imageSrc');
+    if(videoName) {
+        vid.show();
+        imageSrc.hide();
+        var newsource=imagePrefix+imagePath.slice(0,imagePath.lastIndexOf('/'))+"/"+videoName;
+        //console.log(newsource);
+        //console.log(sources[0].src.slice(-newsource.length));
+        if(newsource!=vid.currentSrc.slice(-newsource.length) || vid.readyState==0) {
+            //console.log("loading new");
+            //it is possible to set a long source list here will that be unworkable?
+            var sources = vid.getElementsByTagName('source');
+            sources[0].src=newsource;
+            var tracks = vid.getElementsByTagName('track');
+            if(tracks.length){
+                tracks[0].parentNode.removeChild(tracks[0]);
+            }
+            vid.load();
+            addVideoTimingTrack(vid, Monitor.LabelFormat, Monitor.Name, duration, startTime)
+            vid.currentTime = fid/fps;
+        } else {
+            if(!vid.seeking)
+                vid.currentTime=fid/fps;
+        }
+    } else {
+        vid.hide();
+        imageSrc.show();
+        imageSrc.setProperty( 'src', imagePrefix+imagePath );
+        imageSrc.removeEvent( 'click' );
+        imageSrc.addEvent( 'click', showEvent.pass( [ eid, fid, width, height ] ) );
+    }
+    var eventData = $('eventData');
+    eventData.removeEvent( 'click' );
+    eventData.addEvent( 'click', showEvent.pass( [eid, fid, width, height] ) );
 }
-function tlZoomBounds( minTime, maxTime )
-{
-    console.log( "Zooming" );
-    window.location = '?view='+currentView+filterQuery+'&minTime='+minTime+'&maxTime='+maxTime;
+function tlZoomBounds( minTime, maxTime ) {
+    console.log( "Zooming" );
+    window.location = '?view='+currentView+filterQuery+'&minTime='+minTime+'&maxTime='+maxTime;
 }
-function tlZoomRange( midTime, range )
-{
-    window.location = '?view='+currentView+filterQuery+'&midTime='+midTime+'&range='+range;
+function tlZoomRange( midTime, range ) {
+    window.location = '?view='+currentView+filterQuery+'&midTime='+midTime+'&range='+range;
 }
-function tlPan( midTime, range )
-{
-    window.location = '?view='+currentView+filterQuery+'&midTime='+midTime+'&range='+range;
+function tlPan( midTime, range ) {
+    window.location = '?view='+currentView+filterQuery+'&midTime='+midTime+'&range='+range;
 }
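The seek in `loadEventImage` relies on `previewEvent` passing `fps = event.Frames / event.Length`, so positioning the preview video at a frame is a simple division. As a standalone sketch (hypothetical helper; the real code sets `vid.currentTime` inline):

```javascript
// Map a frame id to a playback time using the event's average frame rate.
// This is approximate: recordings with a variable frame rate drift from it,
// which is acceptable for a timeline preview.
function seekTimeForFrame(fid, eventFrames, eventLengthSec) {
  var fps = eventFrames / eventLengthSec;
  return fid / fps;
}

// A 600-frame, 60 s event averages 10 fps, so frame 150 sits 15 s in.
```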


@@ -1,16 +1,21 @@
 var filterQuery = '<?php echo validJsStr($filterQuery) ?>';
-var monitorNames = new Object();
 <?php
+$jsMonitors = array();
+$fields = array('Name', 'LabelFormat', 'SaveJPEGs', 'VideoWriter');
 foreach ( $monitors as $monitor )
 {
     if ( !empty($monitorIds[$monitor['Id']]) )
     {
-?>
-monitorNames[<?php echo $monitor['Id'] ?>] = '<?php echo validJsStr($monitor['Name']) ?>';
-<?php
+        $jsMonitor = array();
+        foreach ($fields as $field)
+        {
+            $jsMonitor[$field] = $monitor[$field];
+        }
+        $jsMonitors[$monitor['Id']] = $jsMonitor;
     }
 }
 ?>
+var monitors = <?php echo json_encode($jsMonitors) ?>;
 var archivedString = "<?php echo translate('Archived') ?>";


@@ -29,6 +29,7 @@ if ( !canView( 'Monitors' ) )
 $tabs = array();
 $tabs["general"] = translate('General');
 $tabs["source"] = translate('Source');
+$tabs["storage"] = translate('Storage');
 $tabs["timestamp"] = translate('Timestamp');
 $tabs["buffers"] = translate('Buffers');
 if ( ZM_OPT_CONTROL && canView( 'Control' ) )
@@ -105,6 +106,10 @@ function getMonitorObject( $mid = null)
         'Orientation' => "0",
         'Deinterlacing' => 0,
         'RTSPDescribe' => 0,
+        'SaveJPEGs' => "3",
+        'VideoWriter' => "0",
+        'EncoderParameters' => "# Lines beginning with # are a comment \n# For changing quality, use the crf option\n# 1 is best, 51 is worst quality\n#crf=23\n",
+        'RecordAudio' => "0",
         'LabelFormat' => '%N - %d/%m/%y %H:%M:%S',
         'LabelX' => 0,
         'LabelY' => 0,
@@ -464,6 +469,20 @@ $label_size = array(
     "Large" => 2
 );
+
+$savejpegopts = array(
+    "Disabled" => 0,
+    "Frames only" => 1,
+    "Analysis images only (if available)" => 2,
+    "Frames + Analysis images (if available)" => 3,
+    "Snapshot Only" => 4
+);
+
+$videowriteropts = array(
+    "Disabled" => 0,
+    "X264 Encode" => 1,
+    "H264 Camera Passthrough" => 2
+);
 
 xhtmlHeaders(__FILE__, translate('Monitor')." - ".validHtmlStr($monitor['Name']) );
 ?>
 <body>
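Elsewhere in this patch the views test `$event['SaveJPEGs'] & 3` rather than comparing against a single option value. A sketch of why that bitmask works with the option list above (my reading of the patch, expressed as a hypothetical helper):

```javascript
// SaveJPEGs appears to be bit-coded: bit 0 (1) = capture frames on disk,
// bit 1 (2) = analysis frames on disk. Values 1-3 therefore have per-frame
// JPEGs available, while "Snapshot Only" (4) sets neither low bit, so the
// stills UI stays hidden for it.
function hasPerFrameJpegs(saveJpegs) {
  return (saveJpegs & 3) !== 0;
}
```

This matches the `if ( $event['SaveJPEGs'] & 3 )` guards around the Stills button and the eventStills panel in event.php.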
@@ -607,6 +626,15 @@ if ( $tab != 'source' )
     <input type="hidden" name="newMonitor[Deinterlacing]" value="<?php echo validHtmlStr($newMonitor['Deinterlacing']) ?>"/>
 <?php
 }
+if ( $tab != 'storage' )
+{
+?>
+    <input type="hidden" name="newMonitor[SaveJPEGs]" value="<?php echo validHtmlStr($newMonitor['SaveJPEGs']) ?>"/>
+    <input type="hidden" name="newMonitor[VideoWriter]" value="<?php echo validHtmlStr($newMonitor['VideoWriter']) ?>"/>
+    <input type="hidden" name="newMonitor[EncoderParameters]" value="<?php echo validHtmlStr($newMonitor['EncoderParameters']) ?>"/>
+    <input type="hidden" name="newMonitor[RecordAudio]" value="<?php echo validHtmlStr($newMonitor['RecordAudio']) ?>"/>
+<?php
+}
 if ( $tab != 'source' || ($newMonitor['Type'] != 'Remote' && $newMonitor['Protocol'] != 'RTSP'))
 {
 ?>
@@ -898,6 +926,14 @@ switch ( $tab )
 <?php
         break;
     }
+    case 'storage' :
+?>
+        <tr><td><?php echo translate('SaveJPEGs') ?></td><td><select name="newMonitor[SaveJPEGs]"><?php foreach ( $savejpegopts as $name => $value ) { ?><option value="<?php echo $value ?>"<?php if ( $value == $newMonitor['SaveJPEGs'] ) { ?> selected="selected"<?php } ?>><?php echo $name ?></option><?php } ?></select></td></tr>
+        <tr><td><?php echo translate('VideoWriter') ?></td><td><select name="newMonitor[VideoWriter]"><?php foreach ( $videowriteropts as $name => $value ) { ?><option value="<?php echo $value ?>"<?php if ( $value == $newMonitor['VideoWriter'] ) { ?> selected="selected"<?php } ?>><?php echo $name ?></option><?php } ?></select></td></tr>
+        <tr><td><?php echo translate('OptionalEncoderParam') ?></td><td><textarea name="newMonitor[EncoderParameters]" rows="4" cols="36"><?php echo validHtmlStr($newMonitor['EncoderParameters']) ?></textarea></td></tr>
+        <tr><td><?php echo translate('RecordAudio') ?></td><td><input type="checkbox" name="newMonitor[RecordAudio]" value="1"<?php if ( !empty($newMonitor['RecordAudio']) ) { ?> checked="checked"<?php } ?>/></td></tr>
+<?php
+        break;
     case 'timestamp' :
     {
 ?>


@@ -143,11 +143,11 @@ foreach( dbFetchAll( $monitorsSql ) as $row )
 }
 $rangeSql = "select min(E.StartTime) as MinTime, max(E.EndTime) as MaxTime from Events as E inner join Monitors as M on (E.MonitorId = M.Id) where not isnull(E.StartTime) and not isnull(E.EndTime)";
-$eventsSql = "select E.Id,E.Name,E.StartTime,E.EndTime,E.Length,E.Frames,E.MaxScore,E.Cause,E.Notes,E.Archived,E.MonitorId from Events as E inner join Monitors as M on (E.MonitorId = M.Id) where not isnull(StartTime)";
+$eventsSql = "SELECT * FROM Events AS E WHERE NOT isnull(StartTime)";
 if ( !empty($user['MonitorIds']) )
 {
-    $monFilterSql = ' AND M.Id IN ('.$user['MonitorIds'].')';
+    $monFilterSql = ' AND E.MonitorId IN ('.$user['MonitorIds'].')';
     $rangeSql .= $monFilterSql;
     $eventsSql .= $monFilterSql;
@@ -309,7 +309,7 @@ $midTime = strftime( STRF_FMT_DATETIME_DB, $midTimeT );
 if ( isset($minTime) && isset($maxTime) )
 {
-    $eventsSql .= " and E.EndTime >= '$minTime' and E.StartTime <= '$maxTime'";
+    $eventsSql .= " and EndTime >= '$minTime' and StartTime <= '$maxTime'";
 }
 $eventsSql .= " order by Id asc";
@@ -811,7 +811,18 @@ xhtmlHeaders(__FILE__, translate('Timeline') );
 <div id="content" class="chartSize">
     <div id="topPanel" class="graphWidth">
         <div id="imagePanel">
-            <div id="image" class="imageHeight"><img id="imageSrc" class="imageWidth" src="graphics/transparent.gif" alt="<?php echo translate('ViewEvent') ?>" title="<?php echo translate('ViewEvent') ?>"/></div>
+            <div id="image" class="imageHeight">
+                <img id="imageSrc" class="imageWidth" src="graphics/transparent.gif" alt="<?php echo translate('ViewEvent') ?>" title="<?php echo translate('ViewEvent') ?>"/>
+<?php
+// Due to a Chrome bug (https://code.google.com/p/chromium/issues/detail?id=472300),
+// the crossorigin attribute must be added below to make captions work in Chrome.
+?>
+                <video id="preview" width="100%" controls crossorigin="anonymous">
+                    <source src="<?php echo getEventDefaultVideoPath($event); ?>" type="video/mp4">
+                    Your browser does not support the video tag.
+                </video>
+            </div>
         </div>
         <div id="dataPanel">
             <div id="textPanel">
@@ -921,6 +932,10 @@ foreach( array_keys($monEventSlots) as $monitorId )
 <?php
     unset( $currEventSlots );
     $currEventSlots = &$monEventSlots[$monitorId];
+    $monitorMouseover = $mouseover;
+    if ($monitors[$monitorId]['SaveJPEGs'] == 2) {
+        $monitorMouseover = false;
+    }
     for ( $i = 0; $i < $chart['graph']['width']; $i++ )
     {
         if ( isset($currEventSlots[$i]) )
@@ -928,7 +943,7 @@ foreach( array_keys($monEventSlots) as $monitorId )
             unset( $slot );
             $slot = &$currEventSlots[$i];
-            if ( $mouseover )
+            if ( $monitorMouseover )
             {
                 $behaviours = array(
                     'onclick="'.getSlotShowEventBehaviour( $slot ).'"',
@@ -968,5 +983,6 @@ foreach( array_keys($monEventSlots) as $monitorId )
         </div>
     </div>
 </div>
+<script src="//cdnjs.cloudflare.com/ajax/libs/moment.js/2.10.6/moment.min.js"></script>
 </body>
 </html>

web/views/view_video.php (new file, 106 lines)

@@ -0,0 +1,106 @@
<?php
//
// ZoneMinder web video view file, $Date: 2008-09-29 14:15:13 +0100 (Mon, 29 Sep 2008) $, $Revision: 2640 $
// Copyright (C) 2001-2008 Philip Coombes
//
// This program is free software; you can redistribute it and/or
// modify it under the terms of the GNU General Public License
// as published by the Free Software Foundation; either version 2
// of the License, or (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program; if not, write to the Free Software
// Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
//
// Calling sequence: ... /zm/index.php?view=video&event_id=123
//
// event_id is the id of the event to view
//
// Does not support scaling at this time.
//
if ( !canView( 'Events' ) ) {
$view = "error";
return;
}
require_once('includes/Event.php');
$errorText = false;
$path = '';
if ( ! empty($_REQUEST['eid'] ) ) {
$Event = new Event( $_REQUEST['eid'] );
$path = $Event->Path().'/'.$Event->DefaultVideo();
Debug("Path: $path");
} else {
$errorText = "No video path";
}
if ( $errorText ) {
Error( $errorText );
header ("HTTP/1.0 404 Not Found");
die();
}
$size = filesize($path);
$fh = @fopen($path,'rb');
if ( ! $fh ) {
header ("HTTP/1.0 404 Not Found");
die();
}
$begin = 0;
$end = $size-1;
$length = $size;
if ( isset( $_SERVER['HTTP_RANGE'] ) ) {
Debug("Using Range " . $_SERVER['HTTP_RANGE'] );
if ( preg_match( '/bytes=\h*(\d+)-(\d*)[\D.*]?/i', $_SERVER['HTTP_RANGE'], $matches) ) {
$begin = intval( $matches[1] );
if ( ! empty( $matches[2]) ) {
$end = intval( $matches[2] );
}
$length = $end - $begin + 1;
Debug("Using Range $begin $end size: $size, length: $length");
}
} # end if HTTP_RANGE
header('Content-type: video/mp4');
header('Accept-Ranges: bytes');
header('Content-Length: '.$length);
header("Content-Disposition: inline;");
if ( $begin > 0 || $end < $size-1 ) {
header('HTTP/1.0 206 Partial Content');
header("Content-Range: bytes $begin-$end/$size");
header("Content-Transfer-Encoding: binary\n");
header('Connection: close');
} else {
header('HTTP/1.0 200 OK');
}
// Apparently without these we get a few extra bytes of output at the end...
ob_clean();
flush();
$cur = $begin;
fseek( $fh, $begin, 0 );
while( $length && ( ! feof( $fh ) ) && ( connection_status() == 0 ) ) {
$amount = min( 1024*16, $length );
print fread( $fh, $amount );
$length -= $amount;
usleep(100);
}
fclose( $fh );
exit();
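view_video.php above honors HTTP Range requests so the browser's video element can seek: it defaults to the whole file, narrows to the requested byte window, and reports `length = end - begin + 1` with a 206 response. A sketch of that byte-range arithmetic (hypothetical helper, not from the patch; it uses a simplified version of the PHP's regex):

```javascript
// Compute the byte window to serve for a given Range header and file size.
// A closed range "bytes=A-B" serves [A, B]; an open range "bytes=A-" serves
// [A, size-1]; no header serves the whole file.
function rangeWindow(rangeHeader, size) {
  var begin = 0, end = size - 1;
  var m = rangeHeader && rangeHeader.match(/bytes=\s*(\d+)-(\d*)/i);
  if (m) {
    begin = parseInt(m[1], 10);
    if (m[2] !== '') end = parseInt(m[2], 10);
  }
  return { begin: begin, end: end, length: end - begin + 1 };
}

// "bytes=100-199" on a 1000-byte file serves 100 bytes starting at offset 100,
// with a "Content-Range: bytes 100-199/1000" header.
```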


@@ -42,6 +42,12 @@
 #cmakedefine HAVE_GNUTLS_GNUTLS_H 1
 #cmakedefine HAVE_LIBMYSQLCLIENT 1
 #cmakedefine HAVE_MYSQL_H 1
+#cmakedefine HAVE_LIBX264 1
+#cmakedefine HAVE_X264_H 1
+#cmakedefine HAVE_LIBMP4V2 1
+#cmakedefine HAVE_MP4V2_MP4V2_H 1
+#cmakedefine HAVE_MP4V2_H 1
+#cmakedefine HAVE_MP4_H 1
 #cmakedefine HAVE_LIBAVFORMAT 1
 #cmakedefine HAVE_LIBAVFORMAT_AVFORMAT_H 1
 #cmakedefine HAVE_LIBAVCODEC 1