author    dec05eba <dec05eba@protonmail.com>  2022-10-03 16:54:48 +0200
committer dec05eba <dec05eba@protonmail.com>  2022-10-03 16:56:58 +0200
commit    4cd391e07e660b164742b357299fab6ca565807b (patch)
tree      833692c5fc26f412c7ab13d5eca1d746549fc8f3
parent    9ff0bb199be163d6c11266c64134e4e38ab6115d (diff)
Add info about flatpak package, default to h264 (unless resolution is greater than 3840x2160) and add -k option to set codec
-rw-r--r--  README.md     14
-rw-r--r--  TODO           3
-rw-r--r--  src/main.cpp  58
3 files changed, 57 insertions(+), 18 deletions(-)
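The new -k option described in the commit message slots into the existing README invocation; as a hypothetical example (window selection, audio device and output path are placeholders taken from the README), forcing the h265 encoder would look like:

gpu-screen-recorder -w $(xdotool selectwindow) -c mp4 -f 60 -k h265 -a "$(pactl get-default-sink).monitor" -o test_video.mp4

Without -k the encoder is now chosen automatically: h264 by default, h265 when the recorded width is above 3840 or the height is above 2160.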
diff --git a/README.md b/README.md
index 0d2c49c..867c15c 100644
--- a/README.md
+++ b/README.md
@@ -18,7 +18,8 @@ Using NvFBC (recording the monitor/screen) is not faster than not using NvFBC (r
# Installation
If you are running an Arch Linux based distro, then you can find gpu screen recorder on aur under the name gpu-screen-recorder-git (`yay -S gpu-screen-recorder-git`).\
If you are running an Ubuntu based distro then run `install_ubuntu.sh` as root: `sudo ./install_ubuntu.sh`.\
-If you are running another distro then you can run `install.sh` as root: `sudo ./install.sh`, but you need to manually install the dependencies, as described below.
+If you are running another distro then you can run `install.sh` as root: `sudo ./install.sh`, but you need to manually install the dependencies, as described below.\
+You can also install gpu screen recorder ([the gtk gui version](https://git.dec05eba.com/gpu-screen-recorder-gtk/)) from flathub: https://flathub.org/apps/details/com.dec05eba.gpu_screen_recorder.
# Dependencies
`libgl (libglvnd), ffmpeg, libx11, libxcomposite, libpulse`. You need to additionally have `cuda` installed when you run `gpu-screen-recorder`.\
@@ -28,8 +29,11 @@ Recording monitors requires a gpu with NvFBC support (note: this is not required
Run `scripts/interactive.sh` or run gpu-screen-recorder directly, for example: `gpu-screen-recorder -w $(xdotool selectwindow) -c mp4 -f 60 -a "$(pactl get-default-sink).monitor" -o test_video.mp4`\
Then stop the screen recorder with Ctrl+C, which will also save the recording.\
Send signal SIGUSR1 (`killall -SIGUSR1 gpu-screen-recorder`) to gpu-screen-recorder when in replay mode to save the replay. The paths to the saved files is output to stdout after the recording is saved.\
-You can find the default output audio device (headset, speakers) with the command `pactl get-default-sink`. Add `monitor` to the end of that to use that as an audio input in gpu-screen-recorder.\
+You can find the default output audio device (headset or speakers, in other words desktop audio) with the command `pactl get-default-sink`. Add `monitor` to the end of that to use that as an audio input in gpu-screen-recorder.\
You can find the default input audio device (microphone) with the command `pactl get-default-source`. This input should not have `monitor` added to the end when used in gpu-screen-recorder.\
+Example of recording both desktop audio and microphone: `gpu-screen-recorder -w $(xdotool selectwindow) -c mp4 -f 60 -a "$(pactl get-default-sink).monitor" -a "$(pactl get-default-source)" -o test_video.mp4`.\
+Note that if you use multiple audio inputs then they are each recorded into separate audio tracks in the video file. There is currently no option to merge audio tracks, but it's a planned feature.
+
There is also a gui for the gpu-screen-recorder called [gpu-screen-recorder-gtk](https://git.dec05eba.com/gpu-screen-recorder-gtk/).
# Demo
@@ -37,8 +41,10 @@ There is also a gui for the gpu-screen-recorder called [gpu-screen-recorder-gtk]
# FAQ
## How is this different from using OBS with nvenc?
-OBS only uses the gpu for video encoding, but the window image that is encoded is sent from the GPU to the CPU and then back to the GPU. These operations are very slow and causes all of the fps drops when using OBS. OBS only uses the GPU efficiently on Windows 10 and Nvidia.\
+OBS only uses the gpu for video encoding, but the window image that is encoded is copied from the GPU to the CPU and then back to the GPU (video encoding unit). These operations are very slow and cause all of the fps drops when using OBS. OBS only uses the GPU efficiently on Windows 10 and Nvidia.\
This gpu-screen-recorder keeps the window image on the GPU and sends it directly to the video encoding unit on the GPU by using CUDA. This means that CPU usage remains at around 0% when using this screen recorder.
+## How is this different from using OBS NvFBC plugin?
+The plugin does everything on the GPU and gives the texture to OBS, but OBS does not know how to use the texture directly on the GPU so it copies the texture to the CPU and then back to the GPU (video encoding unit). These operations are very slow and cause a lot of fps drops unless you have a fast CPU. This is especially noticeable when recording at higher resolutions than 1080p.
## How is this different from using FFMPEG with x11grab and nvenc?
FFMPEG only uses the GPU with CUDA when doing transcoding from an input video to an output video, and not when recording the screen when using x11grab. So FFMPEG has the same fps drop issues that OBS has.
@@ -49,4 +55,4 @@ libraries at compile-time.
* Dynamically change bitrate/resolution to match desired fps. This would be helpful when streaming for example, where the encode output speed also depends on upload speed to the streaming service.
* Show cursor when recording. Currently the cursor is not visible when recording a window and it's disabled when recording screen-direct to allow direct nvfbc capture for fullscreen windows, which allows for better performance and variable refresh rate monitors to work.
* Implement opengl injection to capture texture. This fixes composition issues and (VRR) without having to use NvFBC direct capture.
-* Always use direct capture with NvFBC once the capture issue in mpv fullscreen has been resolved.
+* Always use direct capture with NvFBC once the capture issue in mpv fullscreen has been resolved (maybe detect if direct capture fails in nvfbc and switch to non-direct recording; NvFBC reports whether direct capture failed).
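The note above about multiple audio inputs ending up as separate tracks has no in-tool merge option yet. As a rough workaround outside gpu-screen-recorder, plain ffmpeg can mix the tracks afterwards; a sketch, assuming the README's test_video.mp4 output and exactly two audio streams:

ffmpeg -i test_video.mp4 -filter_complex "[0:a:0][0:a:1]amix=inputs=2:duration=longest[aout]" -map 0:v -map "[aout]" -c:v copy test_video_merged.mp4

The video stream is copied as-is; only the mixed audio track is re-encoded.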
diff --git a/TODO b/TODO
index 0b2a32b..1e88c53 100644
--- a/TODO
+++ b/TODO
@@ -3,7 +3,6 @@ Only add window to list if its the window is a topmost window.
Track window damages and only update then. That is better for output file size.
Getting the texture of a window when using a compositor is an nvidia specific limitation. When gpu-screen-recorder supports other gpus then this can be ignored.
Quickly changing workspace and back while recording under i3 breaks the screen recorder. i3 probably unmaps windows in other workspaces.
-Nvidia 515.57 supports nvfbc direct capture with mouse capture. Check if driver is equal or newer than this and use mouse capture in such situations (with direct capture) supports nvfbc direct capture with mouse capture.
See https://trac.ffmpeg.org/wiki/EncodingForStreamingSites for optimizing streaming.
Add option to merge audio tracks into one (muxing?) by adding multiple audio streams in one -a arg separated by comma.
Look at VK_EXT_external_memory_dma_buf.
@@ -13,4 +12,4 @@ Allow recording all monitors/selected monitor without nvfbc by recording the com
Allow recording a region by recording the compositor proxy window / nvfbc window and copying part of it.
Resizing the target window to be smaller than the initial size is buggy. The window texture ends up duplicated in the video.
Handle frames (especially for applications with rounded client-side decorations, such as gnome applications. They are huge).
-Use nvenc directly, which allows removing the use of cuda.
\ No newline at end of file
+Use nvenc directly, which allows removing the use of cuda.
diff --git a/src/main.cpp b/src/main.cpp
index 9e8e685..4a470b4 100644
--- a/src/main.cpp
+++ b/src/main.cpp
@@ -101,6 +101,11 @@ enum class VideoQuality {
ULTRA
};
+enum class VideoCodec {
+ H264,
+ H265
+};
+
static double clock_get_monotonic_seconds() {
struct timespec ts;
ts.tv_sec = 0;
@@ -592,15 +597,15 @@ static AVCodecContext* create_audio_codec_context(AVFormatContext *av_format_con
static AVCodecContext *create_video_codec_context(AVFormatContext *av_format_context,
VideoQuality video_quality,
int record_width, int record_height,
- int fps, bool use_hevc) {
- const AVCodec *codec = avcodec_find_encoder_by_name(use_hevc ? "hevc_nvenc" : "h264_nvenc");
+ int fps, VideoCodec video_codec) {
+ const AVCodec *codec = avcodec_find_encoder_by_name(video_codec == VideoCodec::H265 ? "hevc_nvenc" : "h264_nvenc");
if (!codec) {
- codec = avcodec_find_encoder_by_name(use_hevc ? "nvenc_hevc" : "nvenc_h264");
+ codec = avcodec_find_encoder_by_name(video_codec == VideoCodec::H265 ? "nvenc_hevc" : "nvenc_h264");
}
if (!codec) {
fprintf(
stderr,
- "Error: Could not find %s encoder\n", use_hevc ? "hevc" : "h264");
+ "Error: Could not find %s encoder\n", video_codec == VideoCodec::H265 ? "hevc" : "h264");
exit(1);
}
@@ -629,7 +634,7 @@ static AVCodecContext *create_video_codec_context(AVFormatContext *av_format_con
codec_context->max_b_frames = 0;
codec_context->pix_fmt = AV_PIX_FMT_CUDA;
codec_context->color_range = AVCOL_RANGE_JPEG;
- if(use_hevc)
+ if(video_codec == VideoCodec::H265)
codec_context->codec_tag = MKTAG('h', 'v', 'c', '1');
switch(video_quality) {
case VideoQuality::VERY_HIGH:
@@ -839,10 +844,11 @@ static void usage() {
fprintf(stderr, " -c Container format for output file, for example mp4, or flv.\n");
fprintf(stderr, " -f Framerate to record at.\n");
fprintf(stderr, " -a Audio device to record from (pulse audio device). Can be specified multiple times. Each time this is specified a new audio track is added for the specified audio device. A name can be given to the audio input device by prefixing the audio input with <name>/, for example \"dummy/alsa_output.pci-0000_00_1b.0.analog-stereo.monitor\". Optional, no audio track is added by default.\n");
- fprintf(stderr, " -q Video quality. Should either be 'very_high' or 'ultra'. 'very_high' is the recommended, especially when live streaming or when you have a slower harddrive. Optional, set to 'very_high' be default.\n");
+ fprintf(stderr, " -q Video quality. Should be either 'very_high' or 'ultra'. 'very_high' is the recommended, especially when live streaming or when you have a slower harddrive. Optional, set to 'very_high' be default.\n");
fprintf(stderr, " -r Replay buffer size in seconds. If this is set, then only the last seconds as set by this option will be stored"
" and the video will only be saved when the gpu-screen-recorder is closed. This feature is similar to Nvidia's instant replay feature."
" This option has be between 5 and 1200. Note that the replay buffer size will not always be precise, because of keyframes. Optional, disabled by default.\n");
+ fprintf(stderr, " -k Codec to use. Should be either 'h264' or 'h265'. Defaults to 'h264' unless recording at a higher resolution than 3840x2160. Forcefully set to 'h264' if -c is 'flv.\n");
fprintf(stderr, " -o The output file path. If omitted then the encoded data is sent to stdout. Required in replay mode (when using -r). In replay mode this has to be an existing directory instead of a file.\n");
fprintf(stderr, "NOTES:\n");
fprintf(stderr, " Send signal SIGINT (Ctrl+C) to gpu-screen-recorder to stop and save the recording (when not using replay mode).\n");
@@ -1076,7 +1082,8 @@ int main(int argc, char **argv) {
{ "-a", Arg { {}, true, true } },
{ "-q", Arg { {}, true, false } },
{ "-o", Arg { {}, true, false } },
- { "-r", Arg { {}, true, false } }
+ { "-r", Arg { {}, true, false } },
+ { "-k", Arg { {}, true, false } }
};
for(int i = 1; i < argc - 1; i += 2) {
@@ -1101,6 +1108,19 @@ int main(int argc, char **argv) {
}
}
+ VideoCodec video_codec;
+ const char *codec_to_use = args["-k"].value();
+ if(codec_to_use) {
+ if(strcmp(codec_to_use, "h264") == 0) {
+ video_codec = VideoCodec::H264;
+ } else if(strcmp(codec_to_use, "h265") == 0) {
+ video_codec = VideoCodec::H265;
+ } else {
+ fprintf(stderr, "Error: -k should either be either 'h264' or 'h265', got: '%s'\n", codec_to_use);
+ usage();
+ }
+ }
+
const Arg &audio_input_arg = args["-a"];
const std::vector<AudioInput> audio_inputs = get_pulseaudio_inputs();
std::vector<AudioInput> requested_audio_inputs;
@@ -1349,6 +1369,21 @@ int main(int argc, char **argv) {
window_pixmap.texture_height = window_height;
}
+ if(!codec_to_use) {
+ // h265 generally allows recording at a higher resolution than h264 on nvidia cards. On a gtx 1080 4k is the max resolution for h264 but for h265 it's 8k.
+ // Another important note is that when recording at a higher fps than about 60, h265 has very bad performance. For example when recording at 144 fps the fps drops to 1
+ // while with h264 the fps doesn't drop.
+ if(window_width > 3840 || window_height > 2160) {
+ fprintf(stderr, "Info: using h265 encoder because a codec was not specified and resolution width is more than 3840 or height is more than 2160\n");
+ codec_to_use = "h265";
+ video_codec = VideoCodec::H265;
+ } else {
+ fprintf(stderr, "Info: using h264 encoder because a codec was not specified\n");
+ codec_to_use = "h264";
+ video_codec = VideoCodec::H264;
+ }
+ }
+
// Video start
AVFormatContext *av_format_context;
// The output format is automatically guessed by the file extension
@@ -1365,16 +1400,15 @@ int main(int argc, char **argv) {
const AVOutputFormat *output_format = av_format_context->oformat;
//bool use_hevc = strcmp(window_str, "screen") == 0 || strcmp(window_str, "screen-direct") == 0;
- bool use_hevc = true;
- if(use_hevc && strcmp(container_format, "flv") == 0) {
- use_hevc = false;
- fprintf(stderr, "Warning: hevc is not compatible with flv, falling back to h264 instead.\n");
+ if(video_codec != VideoCodec::H264 && strcmp(container_format, "flv") == 0) {
+ video_codec = VideoCodec::H264;
+ fprintf(stderr, "Warning: h265 is not compatible with flv, falling back to h264 instead.\n");
}
AVStream *video_stream = nullptr;
std::vector<AudioTrack> audio_tracks;
- AVCodecContext *video_codec_context = create_video_codec_context(av_format_context, quality, record_width, record_height, fps, use_hevc);
+ AVCodecContext *video_codec_context = create_video_codec_context(av_format_context, quality, record_width, record_height, fps, video_codec);
if(replay_buffer_size_secs == -1)
video_stream = create_stream(av_format_context, video_codec_context);
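To see the new flv guard from this hunk in action, a hypothetical run (window selection and output path are placeholders) that asks for h265 together with an flv container should print the fallback warning added above and record with h264 instead:

gpu-screen-recorder -w $(xdotool selectwindow) -c flv -f 60 -k h265 -o test_video.flv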